Why doesn't TypeScript properly type Object.keys? (alexharri.com)
235 points by alexharri on June 24, 2023 | 154 comments



At Notion, we re-type Object.keys and Object.entries for the situations where you want more specific key types. For example, iterating over a `Record<ThingyIdType, ThingyUpdate>` while preserving the well-typed ID key, or iterating over a `SchemaDefinition<Thingy>` which is known to be statically declared as an exact object.

    /**
     * Like Object.keys, but unsound in exchange for more convenience.
     *
     * Casts the result of Object.keys to the known keys of an object type,
     * even though JavaScript objects may contain additional keys.
     *
     * Only use this function when you know/control the provenance of the object
     * you're iterating, and can verify it contains exactly the keys declared
     * to the type system.
     *
     * Example:
     * ```
     * const o = {x: "ok", y: 10}
     * o["z"] = "UNTRACKED_KEY"
     * const safeKeys = Object.keys(o)
     * const unsafeKeys = objectKeys(o)
     * ```
     * => const safeKeys: string[]
     * => const unsafeKeys: ("x" | "y")[] // Missing "z"
     */
    export const objectKeys = Object.keys as <T>(obj: T) => Array<keyof T>
    
    
    /**
     * The type of a single item in `Object.entries<T>(value: T)`.
     *
     * Example:
     * ```
     * interface T {x: string; y: number}
     * type T2 = ObjectEntry<T>
     * ```
     * => type T2 = ["x", string] | ["y", number]
     */
    export type ObjectEntry<T> = {
     // Without Exclude<keyof T, undefined>, this type produces `ExpectedEntries | undefined`
     // if T has any optional keys.
     [K in Exclude<keyof T, undefined>]: [K, T[K]]
    }[Exclude<keyof T, undefined>]
    
    /**
     * Like Object.entries, but returns a more specific type which can be less safe.
     *
     * Example:
     * ```
     * const o = {x: "ok", y: 10}
     * const unsafeEntries = Object.entries(o)
     * const safeEntries = objectEntries(o)
     * ```
     * => const unsafeEntries: [string, string | number][]
     * => const safeEntries: ObjectEntry<{
     *   x: string;
     *   y: number;
     * }>[]
     *
     * See `ObjectEntry` above.
     *
     * Note that Object.entries collapses all possible values into a single union
     * while objectEntries results in a union of 2-tuples.
     */
    export const objectEntries = Object.entries as <T>(
     o: T
    ) => Array<ObjectEntry<T>>
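
A hypothetical usage sketch of the helpers above (ThingyId and ThingyUpdate are made-up stand-ins for the real types):

    type ThingyId = "thingy:a" | "thingy:b"
    interface ThingyUpdate { title: string }
    
    const updates: Record<ThingyId, ThingyUpdate> = {
      "thingy:a": { title: "first" },
      "thingy:b": { title: "second" },
    }
    
    for (const [id, update] of objectEntries(updates)) {
      // id is typed as ThingyId rather than string; update as ThingyUpdate
      console.log(id, update.title)
    }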


I like this but I think it's generally preferable to have something like "unsafe" in the name, e.g. objectKeysUnsafe.

Reason being it's very easy for a new developer to come along and see "objectKeys" being used all over the place, and then assume it's safe to use everywhere without realizing the problem it could cause. Putting "unsafe" in the name (unsafe actually feels a little strong for me, maybe something like "lax" or something else would be better?) makes it a lot more likely that a new dev is at least going to look up the doc on the function to see why it's unsafe.


I think the unsoundness hazard here is scarier in NPM library code where you can't trust your caller too much. In our application, we generally don't create objects with extraneous properties or use subtyping/inheritance.

I did a quick survey of the 316 occurrences of `objectEntries` in our codebase and found very few (like 15-ish) that weren't obviously on a Record type like `Record<ThingyID, Thingy>` or a static, constant configuration object like `{ [T in Foo["type"]]: HandleFooVariant<T> }`.


At my company we just have a wrapper around the Object.keys function (and other similar functions) that provide a more specific type in case you are sure it won’t be a problem (which we generally are for objects we’ve created ourselves).


Notion would be a useful at-scale codebase for a case-study:

- How often are you forced to interact with JS objects over ECMA6 Maps?

- How often are you not forced to use JS objects over ECMA6 Maps, but are using them anyway?


ES6 maps don't work well with mapped types where we encode a type-level relationship between a key and a value depending on the key type:

    type BlockRenderers<Out> = {
     [key in keyof BlockValues]: (
      args: BlockRenderArgs<Out, BlockValues[key]>
     ) => Out | undefined
    }
or something like:

    type Table = 'block' /* | ... ~70 other table names */
    type ID<T extends Table> = `${T}:${UUID}`
    
    type TableToRow = {
      block: BlockValue
      // ... ~70 other table schemas
    }
    
    type RowsById = { [T in Table as ID<T>]: TableToRow[T] }
Maps are also annoying to persist compared to Record<K, V> types, which just work with JSON. The majority of our code is oriented around JSON-serializable discriminated union types. In my benchmarking, transcoding between Map<K, V> and Record<K, V> doesn't seem worth the effort.

So, we use Map for in-memory-only data organization and some framework stuff, but most engineers tend to interact much more with persisted data and thus Record types.


Does it catch if you attempt to use a more complex type as a key? (In which case, you should use a Map instead of an object.) I wonder if you should be using `T extends string | number`


These helpers just return existing information about an object, they don't wrap the object passed in. The generic gets inferred from the argument. It's not possible to pass in an object where 'keyof T' is anything other than string, number, and/or literal strings or numbers.


Mapping/looping over Object.entries is great. But you really should only be doing that with Record<>, not some random object. A record is more like a container, such as an array, than an object with properties that might have different types.


Nice explanation!

Although I’m surprised there is no mention of the best solution for iterating both keys and values:

for (const [key, val] of Object.entries(obj)) { console.log(key, val) }

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Note that Object.entries also returns keys as strings (and not keyof T), just like Object.keys does. It has the exact same problem and rationale as explained in this article. This means that if you just want to loop, you’re good, but the moment you need something fancier than a console.log you might need the keys to be typed stronger again.


In many cases though, you only need a correctly-typed key in order to index into the object to get the value -- if you've already got the value, you're fine. And if you need to limit yourself to keys that are defined on the type you're using (rather than arbitrary keys which might exist on the object at runtime) then you need to check anyway.


Object.entries has the same problem: https://stackoverflow.com/q/60141960


It has some of the same problems. It doesn't have the problem of being unable to read obj[key] because typescript doesn't realize it's the same object.


It has that same problem actually:

https://news.ycombinator.com/item?id=36458300

And according to the article, the issue isn't that TS doesn't realize it's the same object, it's that the object might have more keys than what is declared in your interface.

Perhaps what you meant to say is that you can directly use `val` instead of `obj[key]`.


> And according to the article, the issue isn't that TS doesn't realize it's the same object, it's that the object might have more keys than what is declared in your interface.

The main issue at hand was inability to use `options[key]`.

"the object might have more keys" was a possible issue, and was the reason typescript was blocking access to `options[key]`, but it wasn't the main problem.

> It has that same problem actually:

> Perhaps what you meant to say is that you can directly use `val` instead of `obj[key]`.

How is that different from what I said?

Because you can directly use `val`, and because that's better than `obj[key]` anyway, you "don't have the problem of being unable to read obj[key]".


`Object.entries` and `Object.values` are unsound[0] because they aren't typed as defensively as `Object.keys`. And that seems unlikely to change[1].

[0]: https://tsplay.dev/N7zKRw

[1]: https://github.com/microsoft/TypeScript/issues/38520
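
A minimal illustration of the unsoundness (my own example, not the linked playground's):

    interface Point { x: number; y: number }
    
    const p3d = { x: 1, y: 2, z: "oops" }
    const p: Point = p3d // fine: structural subtyping allows the extra property
    
    // typed as number[], but actually contains the string "oops" at runtime
    const values = Object.values(p)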


Great, neat way!


I tend to try to avoid for loops unless absolutely necessary. I prefer to express operations as various combinations of map, reduce, filter, etc.


I tend to avoid map and filter unless the code I'm writing can be naturally expressed using them. I almost never use reduce. Sum of a list is one of the few reasonable use cases. I've seen some horrific reduce code that's working very hard to avoid an explicit loop.

If I need to count occurrences of each character in a string, I'm using a loop.


Is there a functional rationale for this or more of style thing? I almost never use reduce but I use map and filter (and forEach) everywhere. The syntax of for loops just seems so dirty to me but maybe I just stylistically like the functional / immutable approach to architecting code which is why I use Ramda.


input.split('').filter((c) => c === chr).length;

Edit: ah I assume you mean more like a histogram. Yeah, that's going to be less clean, e.g.

input.split('').reduce((acc, el) => ({...acc, [el]: ( acc[el] ?? 0 ) + 1 }), {});


This is exactly the type of code I'm talking about. That spread operator is producing a lot of garbage (in the GC sense) for no real reason. It's just to avoid mutation and a loop.
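
For comparison, the loop (or a reduce that mutates its accumulator) avoids rebuilding the object once per character; a quick sketch:

    function charCounts(input: string): Record<string, number> {
      const counts: Record<string, number> = {}
      for (const c of input) {
        counts[c] = (counts[c] ?? 0) + 1
      }
      return counts
    }
    
    charCounts("hello") // { h: 1, e: 1, l: 2, o: 1 }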


So, the idea was to make every function's arguments covariant, instead of having to write it explicitly. `KeyboardEvent` is assignable to `KeyboardShortcutEvent` directly, without saying that you actually are ready to receive `<T extends KeyboardShortcutEvent>` or something even more elaborate.

It's a design decision which is a tradeoff, like most design decisions.

I wouldn't mind having instead a more succinct syntax for covariance, like `event: KeyboardShortcutEvent+`, maybe along with a similar syntax for contravariance (fewer properties).

OTOH, declaring a well-typed helper function and writing some `TypedObject.keys(x)` instead of `Object.keys(x)` does not seem hard to me.


Flow has exact object types as dedicated annotation/kind.

This problem is much bigger. You can, for example, easily imagine an API where you provide a where clause that is converted to SQL form. You may declare the type to expect only certain parameters, but that won't help you at the type level, as anything extra can be passed. This can be abused to extract sensitive information.

You're forced to emulate exactness by always destructuring objects - not great.


  You're forced to emulate exactness by always destructuring objects - not great.
It’s almost like using objects as enumerable maps is an anti-pattern.


The problem has nothing to do with objects. The problem is, how do you type check something like sprintf without ad hoc type rules?


Typescript can check sprintf though using template string types: https://www.hacklewayne.com/a-truly-strongly-typed-printf-in...
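
A simplified sketch of that technique (my own toy version, not the linked article's code): parse the format string at the type level and derive the argument tuple from it.

    // Recursively map "%s"/"%d" in the format string to argument types.
    type ArgsOf<S extends string> =
      S extends `%s${infer Rest}` ? [string, ...ArgsOf<Rest>] :
      S extends `%d${infer Rest}` ? [number, ...ArgsOf<Rest>] :
      S extends `${infer _Head}${infer Rest}` ? ArgsOf<Rest> :
      []
    
    // Same pattern as the typed Object.keys wrappers upthread:
    // cast a plain implementation to a stricter type.
    const sprintf = ((fmt: string, args: readonly unknown[]): string => {
      let i = 0
      return fmt.replace(/%[sd]/g, () => String(args[i++]))
    }) as <S extends string>(fmt: S, args: ArgsOf<S>) => string
    
    sprintf("%s is %d years old", ["Alice", 30])      // OK
    // sprintf("%s is %d years old", ["Alice", "30"]) // compile error: number expected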


Meanwhile, both Rust [1] and Haskell [2] manage to implement statically type-safe string interpolation.

[1]: https://willcrichton.net/notes/type-safe-printf/

[2]: https://hackage.haskell.org/package/formatting


Yes but the Rust example is an ad hoc type rule implemented behind the macro. You can make it type-safe but you lose the ability to have a formatting language in the string itself.


Rust has explicit support for it in the compiler, which is not great.

Zig does it the right way - it's defined in zig itself, no special cases in compiler like in Rust.


I don’t think I’m following well enough to provide a meaningful response.

This is not meant as an argument against what you’re saying, because I know you were just giving an example, but I found this and thought you may find it interesting: https://www.hacklewayne.com/a-truly-strongly-typed-printf-in...


To extend on it a bit, Flow has a really good story with:

1. nominal types for classes (with correct variance/liskov substitution principles)

2. structural types for the rest

3. explicit annotation for exact structural types

This setup is brilliant.


There's an old issue open in TypeScript for _exact_ types, which might help in some cases here: https://github.com/microsoft/TypeScript/issues/12936

But there are some problems with that approach, mainly that exact types would be somewhat infectious: A regular type is no longer assignable to its exact type, ie `User` is not assignable to `exact User`, so if you want to use an exact type in a function, you must accept the exact type as a parameter, which spreads to the whole call stack where that parameter is passed. Then union types don't quite work as expected anymore either.

I'm pretty happy with the status quo: Object.keys() is unsafe, so either cast your way around it and let the cast be the signal that you're doing something unsafe, or handle unexpected keys explicitly.


Having a generic, easy way to strip off the extra attributes (immutably), that wouldn’t be a problem. Making up an api: Object.exact<T>(obj: T): exact T, eg Object.exact<User>(user)

It’s not possible at the moment as all type information is erased at runtime of course.


I work around the issue of `Object.keys` being unsafe by having a function which takes a list of keys and verifies at compile time that it contains all of the known keys on the object. Excess keys won't be iterated over at runtime, but I consider this to be a boon.

https://tsplay.dev/W47y1m
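
One way to get that guarantee is to take an object literal instead of a list, so that TypeScript's missing/excess property checks do the verification; a sketch:

    // A missing declared key of T fails the mapped-type check; an extra key trips the
    // excess-property check. Keys that only exist at runtime never end up in the list.
    function keysOf<T>(record: { [K in keyof T]-?: true }): (keyof T)[] {
      return Object.keys(record) as (keyof T)[]
    }
    
    interface User { name: string; age?: number }
    
    const userKeys = keysOf<User>({ name: true, age: true }) // ("name" | "age")[]
    // keysOf<User>({ name: true })            // error: 'age' is missing
    // keysOf<User>({ name: true, id: true })  // error: 'id' does not exist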


I use type guards and the `io-ts` package to get both compile-time and runtime type checking:

    import * as t from 'io-ts';

    /* -------- Runtime type definitions -------- */

    export const BarSchema = t.type({
      ...
    })

    export const FooSchema = t.type({
      someStringKey: t.string,
      someNumericKey: t.number,
      someLiteralKey: t.union([t.literal(1), t.literal(8675309)]),
      someOtherType: t.type({
        anotherNumber: t.number,
        yetAnotherType: BarSchema
      })
    })

    /* ---- Run-of-the-mill Typescript types ---- */

    export type Foo = t.TypeOf<typeof FooSchema>;
    export type Bar = t.TypeOf<typeof BarSchema>;

    // Static usage
    export const Baz: Foo = { someStringKey: 5 } // Compile error! Missing keys/etc

    // Runtime usage
    export const isFooType = (input: any): input is Foo =>
      FooSchema.is(input)

    if (!isFooType(someUserInput)) return server.send(404);
You can enforce exact type shape, or just ignore excess keys, etc. It's pretty handy, and saves me so much work maintaining two type systems, a static one for development and a runtime one for production.

All of the typical caveats apply: consider the runtime performance cost, recognize that deep or nuanced types will require a correspondingly deep understanding of io-ts, know that there are limitations and alternative libraries with different tradeoffs, and so on.

I like it. :)


Neat, I hadn't thought about the covariance/contravariance implications of structural typing when getting irritated by this shortcoming yesterday! Small nitpick, the `KeyboardShortcutEvent` snippet at the end would be a great place to showcase `Pick<Foo, "bar" | "baz">`


I thought of using Partial<T> for the last example, as well. Obviously this has its own implications; right tool for the job and all that.


In the keyboard event example of only specifying the properties you want to test against, it would be better to use `Pick` to create a relational type off the original. That way if the property or types change (unlikely in this case), your type system will remain up-to-date.
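
For example (assuming the shortcut handler only needs these three fields of the DOM KeyboardEvent):

    type KeyboardShortcutEvent = Pick<KeyboardEvent, "key" | "metaKey" | "shiftKey">
    
    function handleShortcut(e: KeyboardShortcutEvent) {
      if (e.metaKey && e.key === "s") { /* ... */ }
    }
    
    // A full KeyboardEvent is still assignable, but the type stays tied to the original.
    window.addEventListener("keydown", (e) => handleShortcut(e))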


Yep, this highlights imo the real value of TS, not type safety for data integrity but instead for ease of development and refactoring


Well, there should just be a function `Type.keys` that accepts a type and, at the typelevel/compiletime, finds the keys and returns only those that the type we are using has, not the underlying object at runtime.

Of course, that only works if we know the type statically just like in the example of the post. But that is exactly the usecase that is needed in this example and then the type-errors are gone.


Runtime type information is against the goals of TypeScript.

https://github.com/Microsoft/TypeScript/wiki/TypeScript-Desi...

Edit:

  Non-goals:
  …
  5. Add or rely on run-time type information in programs, or emit different code based on the results of the type system. Instead, encourage programming patterns that do not require run-time metadata.


This is my biggest issue with the language.

Fetch returns “any” meaning you can’t trust the data you received is actually the data you expected. Bugs from this mismatch will be many lines away (on first use) and more difficult to find. Because of this “goal of the language” you cited, there’s no built-in way to validate any data at runtime. In nearly any other typed language I have some deserialization mechanism. Not so in Typescript!

This decision led to more bugs in our codebase than any other. The compiler actively lies to you about the types you’ll have at runtime. The only solutions are codegen or writing validators to poorly approximate what Typescript should give us for free.


Yes, “any” is a wart. And it’s a bad one.

The correct type for values you don’t know the type of (like the response of an API call) is “unknown”.

TypeScript does not provide the facilities you describe because there is not a one-size-fits-all solution to the cases that are possible and common in JavaScript.

It is left to the developer to decide how to validate unknown data at the boundaries of the API.

There are third party libraries that facilitate this in different ways with different trade-offs.

  The compiler actively lies to you about the types you’ll have at runtime.
I find this to be rare if you are using strict mode with proper TypeScript definition files for your platform and dependencies. Usually the lie is in your own code or bad dependencies when an “unknown” type (including “any”) is cast to a concrete type without being validated.

  In nearly any other typed language I have some deserialization mechanism.
Could you provide examples? I either don’t understand or I disagree.


> Usually the lie is in your own code or bad dependencies when an “unknown” type (including “any”) is cast to a concrete type without being validated.

Yes, but one of those bad dependencies is the standard library.


When does the standard library lie in this case?


Things are `any` when they should be `unknown` or generic, mostly. Off the top of my head:

- `JSON.parse` and `.json()` on response bodies both return `any`.

- `JSON.stringify`'s `replacer` argument is typed `(this: any, key: string, value: any) => any`.

- `PromiseRejectedResult["reason"]` is `any`.

There are certainly many others.


> Usually the lie is in your own code or bad dependencies

The lie is almost always in an external API response from fetch (hence the complaint about “any” above).

> Could you provide examples?

Off the top of my head… Go’s stdlib json.Unmarshal and Rust’s Serde derive Deserialize.


The lie is when your code uses* the “any” value where a concrete type is expected.

I was misunderstanding your point with the deserialize.

Edit: “using” -> “uses”


This can't be solved by static analysis: anything that crosses the I/O boundary has to be asserted, refuted or predicated at runtime, and there are libraries for it, e.g. [0] which doesn't throw (based on refutations, which can be mapped to predicates and assertions without much cost) or [1] which throws (based on assertions).

Predicates are the most performant but won't give you any indication of why validation failed (e.g. some nested field was null but was expected to be a number).

Refutations are a great sweet spot: fast while still giving information about the error.

Assertions are slow, but more often than not you don't care.

You can map between any of them, but it doesn't make much sense to map e.g. an assertion to a predicate, as you'd be paying the cost of nested try/catch while dropping the error information.

Refutation is a great base for all three.

[0] https://github.com/preludejs/refute

[1] https://github.com/appliedblockchain/assert-combinators
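
To make the three styles concrete, a hand-rolled sketch (my own code, not the linked libraries' APIs):

    interface User { name: string }
    
    // Predicate: cheap boolean, no error detail.
    const isUser = (u: unknown): u is User =>
      typeof u === "object" && u !== null && typeof (u as { name?: unknown }).name === "string"
    
    // Refutation: undefined on success, a reason on failure.
    const refuteUser = (u: unknown): string | undefined =>
      isUser(u) ? undefined : "expected { name: string }"
    
    // Assertion: throws on failure.
    function assertUser(u: unknown): asserts u is User {
      const reason = refuteUser(u)
      if (reason) throw new TypeError(reason)
    }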


The complaint is that Typescript not emitting any of the type information for the runtime means every library must reimplement the whole TS type system.


Yes, that's true. They could support emitting metadata behind an explicit keyword, which would help and wouldn't bloat anything implicitly; they already emit code for enums, for example.

Personally I'm a fan of not introducing a new language that runs at compile time: just use the same language to get macros and operations on types for free, like Zig does.

TypeScript's type system is already Turing complete, so it's not like they'd be losing anything there.


You might like TS Reset: https://github.com/total-typescript/ts-reset, which fixes this particular problem. I don't personally find it to be a big issue though.

Regarding runtime type checking, if you were to write something that can handle the total space of possible TS types, you would end up with incredibly complex machinery. It would be hard to make it perform, both in terms of speed and bundle size, and it would be hard to predict. I think Zod or perhaps https://arktype.io/ which target a reasonable subset are the only way to go.


This was driving me nuts in a project with lots of backend churn. Runtime type validation libraries like typebox and zod (I like typebox) can really save your bacon.

The downside is the underlying types tend to be more complex when viewed in your IDE, but I think it's worth it.


Here’s a neat trick for those complex types:

  type Identity<T> = T

  // This can be made recursive to an extent, alas I’m on mobile
  type Merge<T> = {
    [K in keyof T]: Identity<T[K]>
  }

  type ReadableFoo = Merge<UnreadableFoo>


You should take a look at https://zod.dev/ if you haven't already - it's a library for runtime parsing that works really well for your use case.

Types are inferred from the schema though personally I like to handwrite types as well to sense check that the schema describes the type I think it does
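
For anyone who hasn't used it, the shape is roughly this (zod's actual API, but a made-up User schema and input):

    import { z } from "zod"
    
    const UserSchema = z.object({ name: z.string(), age: z.number() })
    type User = z.infer<typeof UserSchema> // { name: string; age: number }
    
    declare const someRawInput: string // e.g. a fetch response body
    
    const result = UserSchema.safeParse(JSON.parse(someRawInput))
    if (result.success) {
      const user: User = result.data // validated at runtime, typed at compile time
    }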


I’ve used zod and every other schema validator available for this. Some problems:

1. Types are not written in typescript anymore. Or you have to define them twice and manually ensure they match. ReturnType<typeof MyType> pollutes the codebase.

2. Types have to be defined in order, since they’re now consts. If you have a lot of types which embed other types, good luck determining that order by hand.

3. Recursive types need to be treated specially because a const variable can’t reference itself without some lazy evaluation mechanism.

TS could solve all of this by baking this into the language.


1. You can just use `export type Foo = z.infer<typeof fooParser>` in one place and then import Foo everywhere else, without using z.infer everywhere else

2. Use let and modify your types as new ones become available - union them with a new object that contains the new property you need

3. How often are you making recursive types?

I agree that all of this could be made easier, but zod is the best we have and great for most normal usage. The reason TS doesn't want to make this available at runtime is that it means so many changes they make will become breaking changes. Perhaps one day when there's less development on TS we'll see this get added


Including runtime checks would also have performance implications.

I really enjoyed using myzod (a more performant, simpler zod) for a while, but recently I've been using Typia, which is a codegen approach. I have mixed feelings about it, and from my own benchmarking its performance seems overstated, but the idea is sound: because we know the type, we can compile better, type-optimized serialize/deserialize functions.

As for not littering the codebase with runtime checks, it may be worth reiterating to the person above that you really should only do type determinations at the I/O edges: you parse your input, and it becomes known from then onwards. You runtime type-check your output, and its requirements propagate upwards through your program.


There aren't really "performance implications" for making an impossible thing possible.

The ability to emit a parser/verifier would not require any other runtime or affect the speed of any other code.


Pragmatically, your interest is why I was mentioning typia, which does what you are describing: opt-in parser/stringify/mock-gen codegen derived from typescript.

I think it’s reasonable enough to allow other people to focus on runtime behavior. There’s still a lot to do to model js accurately.

In my personal opinion, the ideal ts would be one where you just write regular js, and the compiler is able to check all of it for correctness implicitly. That would require runtime validators etc to be explicitly written, yes, but you could “just write js” and the correctness of your program could be proven (with guidance to make it more provably correct when it is not yet).


It's also possible -- for specific cases, probably not generally -- to define your schema object as const and use type manipulation to generate "real" types automagically. Toy example here: https://github.com/andrewaylett/aylett.co.uk/blob/3fffae1bab...

This lets me run my inputs through a schema validator, and ensure that the type I'm using statically will match the schema used at runtime.
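
A toy version of the idea (my own sketch, not the linked repo's code):

    // Declare the schema as a const object, then derive the static type from it.
    const userSchema = { name: "string", age: "number" } as const
    
    type FieldType<S> = S extends "string" ? string : S extends "number" ? number : never
    type FromSchema<S> = { [K in keyof S]: FieldType<S[K]> }
    
    type User = FromSchema<typeof userSchema>
    // { readonly name: string; readonly age: number }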


The TS devs have mentioned that they wish JSON.parse returned unknown, but the change is too disruptive now.


It would be a lot nicer if it instead returned some JsonType that’s a union of all the possible JSON values. Anyone know if there’s a good reason why it doesn’t do that?


You can pass an arbitrary rehydration function, which can return non-JSON-representable types


It could look at the return type of your reviver function, or at least whether you passed one in.


There's a big discussion about this: https://github.com/microsoft/TypeScript/issues/1897. The benefit seems extremely limited to me. Valid JSON is obviously a subset of `any`, but I can't think of a situation where that particular specificity provides any value. Can you?


The value is when you're parsing the JSON afterwards. It's good to know you can match it exhaustively -- each value is either a Record<string, Json>, Json[], string, number, boolean or null, and nothing else.

Edit to add: I think “any” is almost always a big cop-out because you couldn’t be bothered figuring out the correct type, and it often causes problems (loss of type coverage) further down the line. I admit I do use “any” in my own code when I really need to, but a library should work harder to avoid it, and the standard platform typings should work harder still.
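
Roughly the union being described, plus the kind of exhaustive matching it enables:

    type Json = string | number | boolean | null | Json[] | { [key: string]: Json }
    
    function describe(value: Json): string {
      if (value === null) return "null"
      if (Array.isArray(value)) return "array"
      switch (typeof value) {
        case "string": return "string"
        case "number": return "number"
        case "boolean": return "boolean"
        default: return "object" // only { [key: string]: Json } remains
      }
    }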


That's effectively what unknown would be - at least the outcome would mostly be the same. You'll end up narrowing values in just the same way.


What are the common operations you can perform on that union?


You can narrow it exhaustively.


Interesting, I think it indeed falls under "emit different code based on the results of the type system". Thank you for the link.

I'm not sure if there is a) any "programming pattern" that can avoid this without other drawbacks and b) if there is any problem with emitting different code based on the types (at compiletime).

I suppose it could lead to breaking behaviour if the typesystem is changed, since it now can impact runtime code. Personally, I think this would be more than worth it, but maybe the typescript team has a different opinion or other reason.


Did the comment you're replying to get edited? They are pretty explicitly talking about statically known type information, not runtime.


They’re suggesting outputting runtime type information by emitting different code based on the type of the value passed to their hypothetical “Type.keys()” function.


Ahh, that makes sense then; thanks for clarifying.


This is why Angular (for dependency injection), as well as some ORMs, has required non-standard TypeScript emit at compile time for a long time now. It looks like a fairly locked-up conflict.


Well that's interesting. That means decorator metadata was a non-goal, despite being supported for ages.


The Reflect.defineMetadata API and the long-supported decorators syntax come from very early versions of Typescript when Typescript was (maybe) more actively trying to steer the direction of ECMAScript by implementing features that were Stage 2 proposals.

Typescript only got official ECMAScript decorator support in the recent v5. ECMAScript decorators only got to stage 3 in April ‘22.

But decorator syntax is just a kind of syntax sugar over passing a function through another function, and you can do that today to achieve runtime type information (see zod etc). Zod could be rewritten using decorator syntax and still be “just JavaScript” while providing compile-time type support.

The distinction being that supporting ECMAScript features is a goal for Typescript, but they were perhaps too aggressive early on in investing in decorators and the Reflect Metadata API. They had the wisdom to put these behind "experimental" flags, but I think they got quite popular within the typescript community due to the early adoption of both Typescript and those features by Angular, which was really the only major lib using TS for quite some time.


> Typescript only got official ECMAScript decorator support in the recent v5. ECMAScript decorators only got to stage 3 in April ‘22.

Yah, they've been out for ages. It's quite surprising how 1. long it's taken ECMA and 2. how quickly TypeScript took advantage of decorator syntax to improve TypeScript. I'd say it's a definite win for those of us who love decorators.

> But decorator syntax is just a kind of syntax sugar over passing a function through another function, and you can do that today to achieve runtime type information

I'll have a look at Zod, thank you! I have to admit I like the simplicity of decorators; I'm playing with Dependency Injection and, while the loss of parameter injection is a bit disappointing, there are ways to work around it, e.g.

   @Injectable([Dependency])
   class Service {
     constructor (private dependency: Dependency) { } 
   }
> [...] features by Angular, which was really the only major lib using TS for quite some time.

Definitely. Although it'd be interesting to see how Angular handles the transition away from parameter injection; there's an open issue about it on their GitHub, but from what I can see none of the core members have spoken about it yet. <https://github.com/angular/angular/issues/50439>

The main proposal from a community member is to replace them with the Service Locator pattern (ew). Thankfully someone in-thread provided them with a little wisdom regarding why that's a terrible idea. Here's hoping Angular keeps a nice API.


I recently discovered zod for this kind of thing. It can be a little obtuse when you first pick it up, but it's incredibly cool and lets you do this kind of thing!


You can build this in TypeScript, using type guards and/or type assertions. Doing so manually for each type might be annoying, but it’s fairly straightforward to generalize with eg a wrapper around a generic library like zod.


Another comment implements what you describe using some crazy typing tricks.

=> https://news.ycombinator.com/item?id=36458653


Awesome, exactly the function that I talked about.


For those that are well versed in type theory, what is the neatest solution to this in language design, if there is one at all?

I'm always bothered by the idea that product types can be "more" than, but never "less" than, the type they are assignable to (contra-variance). For sum types it seems to be the opposite (co-variance), so structural typing is easier.

Which reminds me of an idea I've had for a while regarding object oriented inheritance. Why is inheritance always approached in a contra-variant manner? I have frequently wished that inheritance could be conceptualized as a narrowing, rather than expansion, of type. For example, consider an "Animal" object class, which has a "numLegs" attribute. Then "Dog" could inherit from "Animal" by having exactly 4 legs, as a special case.

I'm not sure whether this concept has been implemented in any language, but I've encountered situations where it would have been the perfect way to model entities, frequently enough, that I believe it could be very useful.


I'm not an expert in type theory, but something like Scala's path-dependent types might work. In the following example (which I copy-pasted from [1]), every object of type Foo has some associated type Bar. You can have multiple objects that are all of type Foo, and they might have different Bar types. Therefore, to refer to a Bar type, you have to reference the specific variable you're talking about – such as in "f1.Bar" below, where f1 is a variable name, not a type name.

    trait Foo{
     class Bar
     def doNothing(b: Bar){}
    }
    val f1 = new Foo{}
    val b1 = new f1.Bar()
    val f2 = new Foo{}
    val b2 = new f2.Bar()

    f1.doNothing(b1) //fine
    f1.doNothing(b2) //won't compile
Applying the same idea to TypeScript, you could imagine using 'keyof' on values instead of types. So instead of

    keys<T extends object>(o: T): (keyof T)[];
it would be something like

    keys(o: object): (keyof o)[];
And the keys returned for an object would only let you index into that specific object, not any other object even if it had the same type.

[1] https://wheaties.github.io/Presentations/Scala-Dep-Types/dep...


> For those that are well versed in type theory, what is the neatest solution to this in language design, if there is one at all?

Row-polymorphism instead of structural subtyping.

The idea is that instead of treating `Record<,>` as parameterized by two ordinary types, you parameterize it with a single type-level collection of key-value pairs, and then write functions which are generic over parts of that collection. So the equivalent of a typescript function that works fine over open records would look something like this `<Rest>(obj: { SomeRow, Rest }) => { SomeRow, Rest }`, whereas `(obj: { SomeRow }) => { SomeRow }` says you can't have any extra entries and `<Rest>(obj: { SomeRow, Rest }) => { SomeRow }` says you can have extra entries but they're getting thrown out.

This also lets you reuse machinery between records and sums more easily, since sums are also realizations of rows.

Purescript and OCaml have good row type support. You can also sort of hack it together in typescript if you're very very careful, but I wouldn't recommend it.
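
A rough sketch of how that last signature can be approximated in TypeScript (the `Rest` type parameter standing in for the row variable):

    // Rest captures the extra fields, so they flow through the function's type.
    function bumpCount<Rest extends object>(
      obj: { count: number } & Rest
    ): { count: number } & Rest {
      return Object.assign({}, obj, { count: obj.count + 1 })
    }
    
    const out = bumpCount({ count: 1, label: "x" })
    out.label // still typed as string: the extra "row" survives the call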


> For example, consider an "Animal" object class, which has a "numLegs" attribute. Then "Dog" could inherit from "Animal" by having exactly 4 legs, as a special case.

In Delphi you can do this:

    type
      TAnimal = class abstract
      public
        class function NumLegs(): integer; virtual; abstract;
      end;
    
      TDog = class(TAnimal)
      protected
        class function NumLegs(): integer; override;
      end;
    
    { TDog }
    
    class function TDog.NumLegs: integer;
    begin
      result := 4;
    end;
    
    var
      d: TDog;
    begin
      WriteLn(TDog.NumLegs); // 4
      d := TDog.Create;
      WriteLn(d.NumLegs); // 4
    end.
I've used this several times to good effect. Were you thinking of something else? If so, could you expand a bit?


Meh. I just type

  const realkey = key as any as <type>
And call it a day. I'm 100% certain I've typed fewer characters of that pattern than that blog post.


The author doesn’t address a common work-around for Object.keys() which is a type guard. https://www.typescriptlang.org/docs/handbook/advanced-types....


Type guarding is definitely a pain point moving from JS to TS


For sure. I think most devs are afraid to add more conditionality but even JS warrants a ton of undefined checks, etc because of the language’s generic nature


A small addition: Structural typing means that TypeScript normally allows extra properties, but in contexts where you're immediately declaring and using a type, TypeScript will complain.

For example: `const user = { name: 'Bob', city: 'Reykjavík' }; saveUser(user)` is allowed because of structural typing. But `const user: User = { name: 'Bob', city: 'Reykjavík' }` throws a compile error; you're declaring that the value is a `User`, but the value you give is not (just) a `User`.
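
Spelled out with an assumed `User` that only declares `name`:

    interface User { name: string }
    
    function saveUser(user: User) {}
    
    const user = { name: "Bob", city: "Reykjavík" }
    saveUser(user) // OK: passed via a variable, the extra property is fine structurally
    
    const u: User = { name: "Bob", city: "Reykjavík" }  // error: excess property 'city'
    saveUser({ name: "Bob", city: "Reykjavík" })        // error: same check on a fresh literal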


I know this is pedantic but there is a logical error in this validation code

    function validateUser(user: User) {
      let error = "";
      for (const key of Object.keys(user)) {
        const validate = validators[key];
        error ||= validate(user[key]); // <-- only first error is assigned 
      }
      return error;
    }

    // instead I would put something like this..

    error += validate(user[key])


Any reason this isn't throwing a warning for me (and is working as expected)? I'm on v4.8.4.


Of course the validateOptions function is nonsensical. It just checks if any null values exist. You could iterate Object.values() to do the same. This needs a better motivating example.


Calling "Object.freeze()" and then asking for the keys/values/entries of a frozen object should be able to return a properly-typed value.

But it doesn't do this =(


It couldn’t, no: the object being frozen might contain more attributes than its type describes. Typescript can’t know these extra attributes.


At the very least, it could return an intersection type that is equivalent to the known keys in the object so you get autocomplete, but allow other arbitrary properties of unknown type:

    type FrozenObject<T> = { readonly [K in keyof T]: T[K] } & { readonly [key: string]: unknown };

    const x = { foo: 1, bar: "hello" }
    x["baz"] = "blah"

    const frozen = Object.freeze(x) as FrozenObject<typeof x>
    const baz = frozen.baz // No autocomplete for 'baz', but no error either


Similar reason why you can’t have exception types in TypeScript. The underlying runtime and language doesn’t allow it to work.


Here is more of an explanation of what I mean. Coming from other languages you might think TypeScript can add a `throws` type annotation so the caller can expect the type of error in the `catch` block. Something like this:

    function parse(input: string): string, throws SyntaxError {}
However, in JavaScript we cannot guarantee that when calling `parse` the only error you might get is the error that this function throws. So there is no point in adding the type annotation.


If typescript had better errors then I probably would support it more as a development language. As it stands it exists almost exclusively as overhead on most user interfaces.


I agree-- in terms of it being almost exclusively overhead.

Personally, I hate typescript. So much redundancy. It's not insanely awful, but just awful and unnecessary enough (except large corporate projects with significant developer churn) that it's a pet peeve of mine.

"Hey, let's just write the same thing 4 times so you know it's a string! Because quotation marks aren't clear enough! Yay!!" "I must redundantly ensure you know this number is a number! Even though you can easily convert it to one if you need to, and move on! Yay"

I'm bummed that it is becoming so common and popular in the industry.


The extra typing is all worth it when I can fearlessly refactor a function or component. I've seen fear of breakage paralyze larger vanilla JS codebases.


Exactly this. I love being confident in making changes to code. Especially once a project grows and other people are working on it.


That actually makes the most sense, of any reasons for using Typescript, that I've heard. Good point.


`let foo = "bar"` makes foo a string

`const foo = "bar"` makes foo a constant string "bar"

Where are you having the redundancy?

I can think of one place that causes me problems, which is typing and destructuring at the same time in function parameters. But there's no type redundancy there, the redundancy is having to write variable names twice.


> I must redundantly ensure you know this number is a number! Even though you can easily convert it to one if you need to, and move on! Yay

I don't get where you're coming from. Yeah you can easily convert something to a number[1], so isn't it a good idea to force a dev to do that easy conversion by emitting a compile-time error if they don't? Because if they don't, and the value doesn't get coerced correctly, then the code blows up at runtime.

[1] Except when you can't and you get NaN


> Hey, let's just write the same thing 4 times so you know it's a string! Because quotation marks aren't clear enough!

I don't follow. Can you provide an example of what you mean?


The entire industry has lost its fucking mind.

Object.keys returns an array of strings. Non-enumerable properties and Symbol-keyed properties aren't included (on the off chance that's even relevant). That's it. It's not rocket science.

This article is complete nonsense. "What if there's no e-mail validator!", then write one? Or don't try and validate an e-mail? Like ?????? How do you not know what properties the object you're validating can have? Is this seriously how these devs operate? Just write some code with no idea what data is coming into it and no desire whatsoever to iron that out before continuing?

6 months with a dynamic language, if you actually bother to learn how to code, and you'll start reading and writing code in such a way that it's blatantly obvious what every type is. I cringe when I see people say stuff like "Typescript forces you to think about types!", in practice it's the complete opposite. There's a basically perfect correlation between devs I've worked with that like TS and those that have no idea what's going on with their code and constantly need to bug the other developers for help.

I was gobsmacked the first time I looked over a TS dev's shoulder and saw them trying to make changes without actually reading any of the code first. Zero idea what variable is defined where or what it holds, nah fuck that, just jump to line 148 and hover over a couple of things to try and pick up the thread from there, she'll be right.


Yeah but - it’s nice to be able to do that. Like what you’re describing -

> . I cringe when I see people say stuff like "Typescript forces you to think about types!", in practice it's the complete opposite … make changes without actually reading any of the code first. Zero idea what variable is defined where or what it holds, nah fuck that, just jump to line 148 and hover over a couple of things to try and pick up the thread from there, she'll be right.

Why should you need to do anything else than that? If TS enables better hinting, and you can just make a change without having to sit down and digest an entire codebase first, then why not enjoy taking advantage of that? How is that not desirable? Why make things more complicated for yourself than they need to be, when you can let your tools do that work for you?


If I understand you correctly, you are saying a real programmer would always know exactly what objects are passed to a function, just by reading the code.

This would only be true for very simple code written in a top-down manner. For library code, you dont know how what code will call the library.


I don't mind TS but sometimes it kinda feels like a cult. Certain personalities love to overcomplicate things for zero actual value other than they get to feel like shamans or something


TS is definitely not zero actual value. The decrease in bugs my team has seen since migrating our JS apps to strict-mode TS is remarkable. It’s been ~5 years since then and I can’t imagine going back.


I did not say TS is of no value. I said overdoing it is of no value. Perhaps I should have phrased that better.


Agreed, and it’s extremely difficult to have any objective TS discourse in TS communities. They are extremely dismissive of criticism, constructive or not.


So the real reason is "because of the limitation of the structural typing instead of the nominal one"?


You can do this with all nominal types too. Imagine types Parent and Child. Child has more properties, yet satisfies the contract of Parent. If you enumerate all the properties of some arbitrary Parent p, you may get some extra properties.


that's not how nominal types work: the properties looped over will be only the ones of Parent


No, sorry, you're not right. Here's some C# you can run in your browser.

https://dotnetfiddle.net/ZeSNkA


In Java or C# you would need to use reflection to loop over the members of an object, but AFAIK this would yield the members of the current instance, not the nominal type.


No, the reason is that the properties enumeration works on the dynamic type and not on the static type.


The article says that it is because of structural typing, but actually it is because in TS objects subtype automatically.

It has little to do with structural typing per se; you would have the same behaviour with nominal classes* and inheritance.

* this is likely why OOP languages have _final_ classes


No, it is unrelated to the nominal/structural distinction.

It is a logical but perhaps counter-intuitive consequence of supporting reflection in a statically typed language. Most languages avoid the issue by either not supporting reflection or by having reflection be untyped (eg returning all members as “object” and let the client cast to the appropriate type). Typescript tries to support type-safe reflection, but this means the types have to be wider than you would intuitively expect, in order to take subtypes into account.


People using a real language like Rust write a 10-100x more performant new tool like `rg/fd` that replaces an existing tool like `grep/find`.

People using TypeScript write blog posts on why TypeScript can't even type `Object.keys`. Someone who only spends time in C++, Go, Java, or Rust would be surprised at this.

Types in TypeScript are like Cheese in Cheeseburger, there is a slice of it but the meat is entirely made up of something else.


If you read the blog post you'd see it's not a "TypeScript can't even type Object.Keys" situation but instead a "it is incorrect to assign it the type you want because of how JavaScript works"


I did read the blog post. The underlying issue might be in JavaScript. But that's an implementation detail.

Here's my question to you: why is this problem non-existent in C++/Java/Go/Rust?


Those languages all have the same issue since it is a logical consequence of having subtypes. You probably just don't realize it because doing anything similar to the example would require convoluted reflection.


Unpopular opinion: TypeScript is overrated.

After 1 year of Vanilla JS you will know all the types by heart.

All the necessary static type analysis will be happening automatically in your brain.

No cluttered type syntax and linter wars needed.


> All the necessary static type analysis will be happening automatically in your brain.

Tell me you’ve never developed large projects on a team without telling me


Tell me large projects did not exist before TypeScript.


Large projects have overwhelmingly been written in statically typed languages throughout the entire history of our industry. Large projects in Javascript is a comparatively recent development, in that context the rise of Typescript is completely understandable.


Sure, but why does TypeScript have to be even less graceful than C#, which is arguably the Gold Standard of static typing?

It's simply a hurdle rather than a blessing all too often.

No issues with C# however.

I use C# as a comparison as it comes from the same house and was arguably used as inspiration for TS.


> why does (language explicitly designed to compile down to JS with 0 adaptation layer) have to be less graceful than (thing made by the same people but with no such constraints)?

Hm.


Genuinely trying to understand.

Argued from first principles:

Since TS is a superset of JS,

we can assert that the two constitute opposite ends of the spectrum of static typing.

Thus, surely there are infinitely many levels of possible gracefulness in-between?

IOW: If JS can live with no constraints, then why can't TS live with slightly more graceful ones than the current?


That’s called adjusting strictness flags.


Imagine doing something similar in C#, i.e. copying all properties from one arbitrary object to another. This would require the use of reflection and would not be typesafe at all, since reflection in C# is not typesafe.


> All the necessary static type analysis will be happening automatically in your brain.

That's exactly the point: nobody wants to do this in their head, and humans are far worse and less consistent about these kinds of computations than machines. Plus, as the size of a codebase grows, the chance that you have it all in your head approaches 0%.

Also, I'm not sure what "linter wars" have to do with TS?


Static type analyzer wars really.

Sure I say go for it for those super galactic world changing giga projects but most simply aren't that.

A project with well organized, clean code without a lot of junk will be a small, easily overseeable project, for a long long while.


The biggest advantage I've seen from using it is the ability to quickly get into old code. Sure, while I'm writing it I have a mental understanding of all the different types, but I won't two years down the line.

Also, when multiple people are working on the same code, it's nice that they all are forced to spell things the same way...


I recently created a new NextJS project.

For my little site I needed an input element with type=file.

So I was getting started, with TypeScript and all, like the swaggest of web developers.

Came the moment for the actual file input:

TypeScript REFUSED to accept the input element's type as "HTMLInputElement" when that is literally its type.

After TypeScript eating up about 1 hour or so of my time, I decided to get rid of that piece of sh*t for squiggly underlining all my code in Red simply because it's too retarded to understand it.

Any of the TypeScript lovers care to explain?

Needless to say, I ultimately went about doing what I wanted in 5 minutes in VanillaJS and was happy ever after.

Call me again when TypeScript does its job correctly.


So you formed your strong opinion, based on one hour of trying it. Gotcha.

As to your problem: Use `... as HTMLInputElement` if Typescript wasn't able to narrow your type sufficiently, or you believe that the value of typing this case isn't worth the effort. This should be somewhat rare.

Use `... as unknown as HTMLInputElement` if your idea of the variable is completely different from Typescript's. But at that point you likely _have_ made a mistake somewhere.

Use `...: any` if you want to completely turn off checking. In most projects, this has to be explicitly specified for each parameter and declaration.

It gets more verbose, the more unsafe your code is.


It's impossible to try to debug your problem with so little to go on. Maybe the type was unioned with something incompatible. At some points, you need to resort to runtime type checking: `if (x instanceof HTMLInputElement) { // TS will know it's a HTMLInputElement at this point } `

Enabling Typescript on an existing, untyped project is going to be rough to start. You'll need to gradually increase the strictness levels and perhaps work on typing one module at a time. With an entirely new project, start with maximum strictness, and things will come together much easier. You might need to learn to do things in new ways to make it easier to work with TS, but that doesn't mean it's wrong.

Certainly, 1 hour to attempt to use any new technology is way too little time. I'd say a minimum of a week of honest effort with a new toy project is warranted before deciding whether it's not for you.


Just type (… as HTMLInputElement) and be done with it. It’s really not that hard.


Duh, that's what I did. And every other variant one can possibly think of. To no avail.

That's what cost me the hour or two.

This works like a charm in a strictly typed language like C#, but TypeScript just hasn't got it down correctly yet.


Show a sample of the issue if you can.


The code in question:

    (e: Event) => {
        let t: EventTarget | null = e.target;
        // here "t as HTMLInputElement" did not work
        if (t instanceof HTMLInputElement && t.files && t.files[0]) {
            const file = t.files[0];
            if (file) {
                (myImageElem! as HTMLImageElement).src = URL.createObjectURL(file);
            }
        }
    }
TS wouldn't accept it. My guess is it's not cluttered enough.

My final vanilla version OTOH:

    (e) => {
        const t = e.target;
        if (t.files) {
            const file = t.files[0];
            myImageElem.src = URL.createObjectURL(file);
        }
    }
How is the latter not a million times more elegant? Why do I need to clutter everything with types that I won't even think twice about?


That second version is literally 100% verified and valid Typescript.

`e` is inferred from `HTMLProps<HTMLInputElement>['onChange']`

`t` is inferred from `e`

`files` is inferred from `t`

No halfway experienced TS developer would write any of these annotations.

If you got the myImageElem from a ref, you may need to add a !, because yes, it's not technically guaranteed that the ref is set by the time the callback is called.


Well I haven't ever used next.js in particular, but if I go open a .tsx file and add <input type="file" onChange={[final vanilla version code]} />, it works fine.

t gets inferred as `EventTarget & HTMLInputElement`

The only error is that "myImageElem" isn't defined by the snippet, which is to be expected.

If I insert your first snippet, it complains that (e:Event) isn't the right type, but (e) or (e:any) makes the whole thing happy (except "myImageElem").

If I remove the imprecise "Event" typing on e, then your first snippet can be simplified to:

  let t = e.target;
  if (t.files && t.files[0]) {
If I keep (e: Event), then the code with "instanceof" works, as does:

  let t = e.target as HTMLInputElement;
  if (t.files && t.files[0]) {


> The only error is that "myImageElem" isn't defined by the snippet, which is to be expected.

It's the ID of a DOM element. No need to define it (for me). But anyway still good I will try your code, thanks.


In that case the only thing you need to make your vanilla code work as typescript is to put the following somewhere:

  declare var myImageElem: HTMLImageElement;


`(e: Event) =>` means you accept any event whatsoever.


That is probably true, however as I said it should (IMO) still work if I then cast its target to HTMLInputElement.


You’ll know all the types of your own code by heart, maybe.

But I doubt you’ll be able to keep up with contributions from three other devs to a massive codebase.

Typescript makes it easier to understand what’s going on at a glance (or a hover, or a ctrl+click) - I’m fine with loosey goosey JavaScript for a solo project, or even a two person collaboration, but for a big team effort? I’m gonna need those strict typings to help keep everyone’s contributions consistent.



