Stage 3 Proposal: Array.prototype.at (tc39.es)
80 points by bpierre 21 days ago | 144 comments



JS urgently needs support for decimals [1] so it can move into sectors such as scientific and financial computing.

V8 + dynamic language + better math support = perfect environment for many applications out there

[1] https://github.com/tc39/proposal-decimal


Couldn't that be done with WASM today? It looks like a few people have managed to build GMP targeting WASM.

You might miss out on some SIMD operations, but you'd be able to do 90% of what you'd want with scientific calcs. (admittedly, it'd be nice if this were part of the language).


There's still a ton of overhead for WASM and it's not first class. Using WASM is a lot like using C code in python. It's still nice to have features as first class citizens :)


Isn't most Python just stringing together C and Fortran libraries that expose a Python interface?


It amazes me how Python web engineers and Python ML engineers go along completely unaware that the other group exists in any large numbers. (I say this as someone who went to my local pycon 5-6 years ago and was surprised that ML talks were like 30% of the topics)


Honest (possibly stupid) question: Do people working on statistical and probability applications never need a numeric type optimized for probability values (i.e. values between 0 and 1 that can get really close to either; for example the almost-certain value p = 1 - 10^-13)? In the same way that financial applications need a decimal type?

I’ve only worked on probability applications at a surface level for a tiny bit and never really needed it, but I kept wondering whether there was such a numeric type, and if not, what people did if they needed one.


Log probabilities are used a bit for this, and most numeric packages (numpy for instance) have special functions for computing log(1+x) which are more accurate when x is tiny.


This would be really nice, but the only solution I know here involves working with numbers that represent a function of p rather than p itself. And what you do depends on what you're computing. In general, you'll find lots of useful math functions that have specialized routines around zero and one, like numpy's log1p function, logaddexp, expm1, and company.

Log odds ratios are also good representations that have this kind of accuracy between zero and one. They're good when you need them. (e.g. 100:1 odds are log(100/1) and 1:100 odds are log(1/100))


I see. So basically you just know a bunch of log rules and then map your distribution into a log-distribution. And I guess this becomes second nature to a seasoned statistician. So a dedicated numeric type for probability value is nothing more than a nice-to-have.


Kinda, but to make it more explicit: with floating point numbers you don't even need the explicit log distribution because it's kinda baked in already, with the number split between a significand (the digits) and an exponent. So floats capture an exponential scale of precision naturally, allowing you to write 1e-200 and 1 and 1e200 just as easily, with a common relative precision around each. You use the log scale as a mental model and then just keep floating point issues in mind while you work.

The handy functions are more about making sure that cancellations don't happen, by providing a few primitive operations that cover lots of the uses. Like the handy log1p, which computes log(1+x). If you did this with the log function, you'd compute log(1+1e-200)=log(1)=0. If you use log1p, you get log1p(1e-200)=1e-200. You avoid the loss of info that comes with adding something close to zero to something that isn't close to zero, which is the real trick. And then if you need something close to one, you probably just use the converse: instead of using p=1-eps, you just work with eps itself.
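A quick illustration with the built-ins JS already has (Math.log1p and Math.expm1; the values shown are approximate):

    Math.log(1 + 1e-200);  // 0, the 1e-200 is lost as soon as it's added to 1
    Math.log1p(1e-200);    // 1e-200, full precision near zero
    // the converse trick: work with eps = 1 - p instead of p itself
    Math.expm1(1e-200);    // 1e-200, i.e. exp(x) - 1 without cancellation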


For financial applications it makes sense, but shouldn't science be independent of the numerical base?


I think the issue isn't the base, but that Javascript stores all numbers as 64-bit floats.


The decimal proposal is vague enough that it is unclear whether it would improve on that. If it were arbitrary-precision decimals, that would be an improvement. If it is limited precision but better than Number (seems unlikely), it would be an improvement. If it is limited precision with no better range/precision than Number, just decimal rather than binary floating point, it's probably not an improvement. The proposal is vague enough that any of those would fit.


JS has BigInt now. You can kinda fake BigDecimal with it, but none of the built-in operators will work.


How do you fake BigDecimal with BigInt? My immediate intuition would have a 3-tuple (a, b, c) which would construct the decimal with a representing the numbers before the decimal separator, b the number of zeroes immediately after the decimal separator, and c the numbers after the zeroes. e.g. 1.000054 would be represented as `[1n, 4n, 54n]` and 3.14159 as `[3n, 0n, 14159n]`.

Is there a better way?


What about a 2-tuple with two numbers, one being the BigInt and one being the exponent. So 1.000054 would be (1000054, -6) = 1000054*10^(-6).

Yours is probably more space efficient though.
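To make the 2-tuple idea concrete, a rough sketch (the shape and names here are purely illustrative, not any proposal's API):

    // value = digits * 10^exp
    const x = { digits: 1000054n, exp: -6 }; // 1.000054
    const y = { digits: 314159n,  exp: -5 }; // 3.14159

    const mul = (a, b) => ({ digits: a.digits * b.digits, exp: a.exp + b.exp });
    const add = (a, b) => {
      // align both operands to the smaller exponent, then add the integers
      const exp = Math.min(a.exp, b.exp);
      const scale = (v) => v.digits * 10n ** BigInt(v.exp - exp);
      return { digits: scale(a) + scale(b), exp };
    };

    add(x, y); // { digits: 4141644n, exp: -6 }, i.e. 4.141644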


This is basically floating-point.

IEEE 754, BTW, allows for 10-based floats, but IDK if anybody uses it.


Yours is more space-efficient for real-world numbers.


Very good point. I'm just now fully understanding why at my last fin-tech job we stored all monetary amounts in pennies. I thought at first it was just ideal to keep everything as integers in the database but I see now that due to our Node backend, something as simple as 0.20 + 0.10 could pose a problem.
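For anyone who hasn't run into it, the classic demo in a JS console:

    0.20 + 0.10;          // 0.30000000000000004
    0.20 + 0.10 === 0.30; // false
    // integer cents sidestep the binary floating point rounding entirely
    20 + 10;              // 30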


That's not due to Node, that's due to IEEE 754 standard floating point operations.


What would the benefits of a language native implementation be, over something like decimal.js?

Is it just a matter of there being a "blessed" implementation for folks to standardize around?


Universal support for third-party integrations, such as connecting with a database that allows for arbitrary-precision decimals, or communicating with other services without forcing the use of a random lib (because there are many that do this kind of thing).

Support for this, I believe, is "low level enough" to be implemented natively; it's not something like protobuf.


We also have a unix timestamp issue coming up.


    const at = (arr, i) => i >= 0 ? arr[i] : arr[arr.length + i];
Why does the core language need to be extended to incorporate this?


Subtle readability improvements on patterns that see extensive use actually make a difference.

I had the exact same thoughts when `.includes()` was proposed to be added to the spec - it's just `.indexOf() !== -1`! - but time has proven me wrong, and I suspect it will prove us wrong about `at` as well.

I've written an `.indexOf() !== -1` check hundreds if not thousands of times, and I've gotten the conditional wrong enough times to actually need to revert a change in prod.

I have similar thoughts about .at(). Taking an element from the end of the array is ever-so-slightly error prone. Sure, you won't make a mistake this time, or the next time, but write it a hundred times, or a thousand, and I bet a bug will slip in. It's darn near impossible to get `.at(-1)` wrong, however.
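A trivial comparison (assuming `.at()` behaves as proposed):

    const arr = [10, 20, 30];
    arr[arr.length - 1]; // 30, but one typo away from arr[arr.length] === undefined
    arr.at(-1);          // 30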


That's actually a pretty reasonable argument, the 'length - N' form is actually quite prone to off-by-one errors so .at might help there.


Indeed, and as others have stated it is really hard to allow `[-1]` without breaking old code.

Also: `[1, NaN, "hi"].indexOf(NaN)` is -1 but `.includes(NaN)` returns true as expected.
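For anyone wondering why: indexOf uses strict equality, and NaN !== NaN, while includes uses the SameValueZero comparison.

    [1, NaN, "hi"].indexOf(NaN);  // -1
    [1, NaN, "hi"].includes(NaN); // true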


> `[1, NaN, "hi"].indexOf(NaN)` is -1 but `.includes(NaN)` returns true

Brutal.


Except you would never get to use this method, because it will take years for all browser vendors to implement it, and even then, you still have a number of users who, for one reason or another, use Internet Explorer.

You need to use a transpiler (like TypeScript), but at that point there's no reason to implement it in the language; a transpiler can implement the feature by simply adding the method to the prototype of the object, like this:

    Array.prototype.at = function (i) { return i < 0 ? this[this.length + i] : this[i] }


Because at some point someone will make a library out of this - and probably already has - adding yet another one to the tens of thousands of files that my cruddy server already can't handle because of the sheer quantity.

Plus you're probably missing a few dozen edge cases and optimizations that the native version would incorporate.

We wouldn't have been in node_modules dependency hell if they incorporated a few more things into the standard library or language.


> optimizations that the native version would incorporate

We shouldn't write stuff in native code for performance... We should write the compiler to detect these and output optimized code.


By "native version", I think that Cthulhu meant building "at" into the JS runtime, not writing "at" as a C extension and compiling it into native code.


in my experience, the vast majority of node modules I have installed don't provide these tiny little functions, but are:

- frameworks like react or express and extensions for the same

- meaningful libraries like axios, ramda, date-fns or qs

- transpiler/bundler enhancements like webpack loaders or babel presets

I would prefer that the JS committee focused on bringing meaningful new capabilities to node and the browser that are currently lacking, like they did with the various HTML5 libraries, fetch(), ES classes and so on, rather than providing tiny functions that aren't even providing new functionality (array indexing is easy and idiomatic in JS). If there was no opportunity cost then sure, add tiny pointless builtin functions all day long.


You seem to be garnering a lot of downvotes without replies so I'll bite: it sounds from your comment like you've never looked into the subdependency tree of those frameworks and meaningful libraries that you list (which seems like it would be hard to miss tbh for anyone that's ever opened up the node_modules directory).

Those tiny little libs providing tiny little functions are likely not direct dependencies of your application, rather dependencies of a subdependency of a subdependency of a subdependency of your preferred "meaningful" library.


I'm aware of that. Unfortunately that's always going to be the case that some people who write libraries you depend on indirectly are not very good at programming and lean heavily on other dependencies to provide basic functionality. That will be the case regardless of how all-encompassing the stdlib is, and is entirely dependent on the culture of practice of the language you're using. JS is both beginner-friendly and the culture is focused on personal marketability, so you're going to get a lot of beginners land-rushing to put out basic libraries that don't do anything to cater to other beginners, in order to put the number of stars/npm downloads on their blog or CV. That's not going to change with the number of new core features.


> That's not going to change with the number of new core features.

I... don't really see why it wouldn't? What's your logic here?


A famous example: the is-odd and is-even libraries. It's not the kind of function that would be appropriate for a stdlib in my opinion, and yet developers who presumably don't know about the modulo operator incorporate these libraries into their projects at a high enough rate to generate 700,000 downloads per week.

Even with all the basic functions one could hope for being brought into JS' core, there will still be tons of bizarre micro-libraries like these in the npm repository being used by developers who rely on libraries first when it comes to any given feature, because doing so is part of JS culture for the reasons I outlined before.


ramda and date-fns are full of little such functions... The fact that util packs like these and lodash exist is a sign that Javascript's stdlib lacks mechanisms to deal with various common patterns.

FWIW, I consider `at` to be similar to some of the things you consider "new capabilities". `fetch` vs `XMLHttpRequest` is fairly analogous to `at` vs `arr[i]`, and similar arguments can be made about ES6 classes over prototypal classes, etc.


ramda is an immutability library - JavaScript itself is mutable. Some functions of date-fns could probably be part of the core, and in fact that's one of the areas I wish the JS WG would focus on, as Date is just not a very useful class.

> `fetch` vs `XMLHttpRequest` is fairly analogous to `at` vs `arr[i]`

If you consider all syntactic sugar to be equivalent, no matter how minor the change, then anything above a basic Turing machine implementation is fairly analogous. The problem with XHR is that the interface was really bad. The interface for arr[i] is not really bad.


I mean it in the opposite sense, actually (that XMLHttpRequest - or more precisely, Microsoft's original version of it - was a leap forward, whereas fetch/axios/friends are, for the most part, merely a convenience over now established capabilities)

Similarly, sure you can do point-free style w/ ramda's `R.add` but realistically, do you really? The fundamental game changer capability is the language's ability to do math in the first place; anything on top is arguably cherry on the cake. `at` seems uncannily similar in that regard.

Wrt something being in a 3rd party library vs stdlib, I'll generally prefer a batteries included approach in JS because module resolution is complex enough to cause difficult problems to troubleshoot (peerDeps hell, complex symlinking semantics, library duplication in bundles, core.js explosion, package manager specific breakages, etc)


Regarding fixing Date, Temporal hit stage 3 recently: https://github.com/tc39/proposal-temporal


Just Google "left-pad" to know why. I think .at() also doesn't have to look up the entire prototype chain so it has performance benefits as well.


    const leftpad = (str, len, char = ' ') => str.length >= len ? str : (char.repeat(len - str.length) + str);
Anybody who installs a dependency instead of writing a one line function is just leaving themselves exposed for no real benefit.

Providing the at function outside of the prototype chain like I did above will not incur the cost you mentioned, even in the unlikely case that such calls are the bottleneck of your application.

Plus, if array lookup calls are your bottleneck then you probably need a different data structure.


Reinventing the wheel is error prone. No one wants to bloat their code and time with unit tests for all these utility methods.

Just look at your at() function: I already spotted a bug if a numeric string is passed: `([].length + '-1') === '0-1'`


Quite incidentally, this is fixed with the addition of one character:

    const at = (arr, i) => i >= 0 ? arr[i] : arr[arr.length + +i];


Nit: it throws if you wanna use a negative bigint as `i` hehehe :(

  let i = -1n;
  let test = +i;
  > Uncaught TypeError: can't convert BigInt to number

See this is where standard libraries shine, they consider all the pieces of the puzzle. There's no good reason to disallow bigints here.

P.S.: The unary plus operator is neat, but this TypeError with BigInt is why I am starting to prefer the more verbose `Number(i)`.
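For contrast:

    +(-1n);      // TypeError: can't convert BigInt to number
    Number(-1n); // -1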


Hmm, I had completely forgotten about that distinction. I also can’t find it in the spec <https://tc39.es/ecma262/>, which says that Number “performs a type conversion when called as a function rather than as a constructor.” <https://tc39.es/ecma262/#sec-number-constructor>. What this means isn’t spelled out, which seems poor. I would have thought it meant ToNumber <https://tc39.es/ecma262/#sec-tonumber>, but that would throw an exception on BigInt (like unary plus does, <https://tc39.es/ecma262/#sec-unary-plus-operator>). So now I’m guessing it means ToPrimitive <https://tc39.es/ecma262/#sec-toprimitive> with preferredType number, which seems to match. This being the case, I believe BigInt is the sole difference between +x and Number(x).


> Reinventing the wheel is error prone.

Having a standard lib for these sort of functions is a common pattern many languages adopt.


Isn't that really a pattern that the runtime adopts (i.e. not the language itself)? For ES, the libraries are dependent on the runtime environment; for browsers, there are the web APIs, and for Node, there's npm. I'm not sure how you could have anything more standard given the nature of things.


This might be just a few levels of people talking past each other, but just in case:

Yes, the full "standard library" you can expect to be present on any given JS VM may vary, but there's no plausible reason that e.g. at() should not work on almost any conceivable platform.


> for Node, there's npm

True, the Node.js runtime comes with its own «standard library», but npm has nothing to do with it.


Their example does no argument validation. It doesn't check that arr is array, or that i is an integer, etc. The spec actually addresses the argument validation.


yes that is obvious - my point was that the functionality is minimal. with guards:

    const at = (arr, i) => {
      if (!(Array.isArray(arr) && typeof i === 'number')) {
        throw new Error('invalid arguments');
      }
      return i >= 0 ? arr[i] : arr[arr.length + i]; // same as before
    };
you wouldn't just commit a single-line function in your code. But there are 10,000 micro-functions that could be in the core of JS. Why bother to add this, especially if it's so easy to implement? There are surely better things the JS committee could be putting their time towards, like providing functionality that is presently lacking in JS (maybe they could finally get around to advancing the pipe operator through the TC stages).


Your number guard is insufficient: NaN, Infinity, and 2.23 are all Numbers but not supported values for the function.

This goes to show that it's NOT trivial to implement in a robust way.


If we're going to nitpick, those are actually all valid values for `arr[i]`... (though, to be fair, they don't exactly do what one might necessarily expect)


It is trivial. It requires a few more lines than my example, but my example was intended to show the core use case. All I'm getting in replies is nitpicking about my example case... Can't you just assume when reading that I know how to write some simple guard statements? It's hardly rocket science.


Because it is such a common case?

To invert your argument, why would you want millions of developers to have to waste their time writing this sort of thing for such a common need?

Yes, it's clear that you could write the function correctly.

Could everyone?

What about bugs that aren't obvious? The runtime errors that result?

Why would we want to waste so much developer time and effort globally, when we can just add .at() to the language?


I want you to look at some other languages besides JavaScript. Take a look at Swift or Ruby. Compared to them it feels like 60% of JS code is boilerplate; 200 lines of JS can fit in 60 lines of Swift. All because the JS stdlib just sucks. I’m tired of this silly ‘someArray.length ? someArray[someArray.length - 1] : null’; I just want ‘someArray.last()’.


You're assuming this is a decision available to each app developer. In fact these dependencies are usually chosen by the developer of a library that's a dependency of a dependency of a dependency of a dependency of a library you choose as an app developer.

Auditing the entire dependency tree of every library you choose is extremely arduous without proper tooling, and the tooling here has never been good (and even the more recently available better tooling is CVE-focused so won't highlight tells like package size/maintenance status/whatever other heuristics you might devise as a proxy for quality).


I agree that dependency hell is a problem, but I don't agree that reinventing the wheel is better. Including things a lot of developers will need in the language is probably the best way to approach things like left_pad, at least given these three options (reinvent, third-party, language inclusion).


or

    const leftpad = (str, len, char = ' ') => str.padStart(len, char);


Haha yes that would be the modern option.


Array parameter should be last, surely? Or why not extend the prototype? If only we had some way to standardise such decisions...


So instead of fixing the language and allowing the use of array[-1], they add another way to access array elements. This is why I don't like JavaScript: instead of fixing a feature, they add a new feature to fix the previous feature.


There is existing code out there which relies on negative indexes to return undefined. Or relies on being able to assign items to negative indexes and then retrieve them, this is perfectly valid code: `a=[];a[-1]=42;console.log('the answer is',a[-1])`. The web tries really hard to be backwards compatible and not break existing code.


Maybe we should reuse the "use strict"; type of metadata. Something like "use es2021" to enable negative indexes, etc.


TC39 explicitly rejected this sort of approach a few years ago, because of the unpleasant way it would fork the web. They _did_ automatically clean up some behavior in a module context when you knew you were using a modern JS engine, but they kept that to a minimum.

Unfortunately, I couldn't find any links to articles written at the time about this, but they definitely did consider "use" options or script type="es2021" kinds of options.


I can see where they're coming from; backwards compatibility is often a hard problem, and this is no exception. But the downside of this approach is that we'll be living with these sort of backward-compatible hacks for decades to come :-/

Imagine learning JS in 5 years: "you can index arrays with subscripts, but actually, if you want to get negative indexes you need to use .at()"

vs.

"Always add 'use es2021'; at the top of your scripts. You can index arrays with subscripts."

I'm hardly a huge Perl fan, but their approach made a lot more sense IMHO and is a good trade-off between making sure existing code works, and not complicating the language for future use. The "fork" (which seems a bit hyperbolic to me) is a very minor short-term pain at best for significant long-term gains.

I don't know of any other mainstream language that's so conservative as JavaScript in never breaking anything, even optionally with flags.

Besides, compatibility is important but not holy. Some programs probably also rely on "[9] * 2" resulting in the Number 18, but we really ought to fix that (with or without a flag) IMHO, because these sorts of gotchas result in people writing bugs every single day, where it works for an array of length 1 and then it has 2 values and you get NaN. The pain of breaking backwards compatibility is minor compared to the pain of new bugs being created every single day.
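For the unfamiliar, the gotcha in question:

    [9] * 2;     // 18, the array is coerced to the string "9" and then to 9
    [9, 10] * 2; // NaN, it coerces to "9,10", which is not a number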

Golden opportunities are being missed here, and it's a shame.


I see two scenarios for ES2042 (possibly both).

0. It will go completely unnoticed, because everyone is using whatever language they like and compiling to WebAssembly, which has been the de facto standard for more than a decade.

1. It's like a new C++, because of the many TC39 stage 3 proposals that have been added over the years without the possibility of ever fixing the language, in order to not fork the web.


I expect what we’ll end up with is languages like TS supporting a flag to convert all/most index lookups into .at() notation. Or maybe it’ll just be an eslint flag.

I absolutely agree that some metadata at the top of a module enabling behavior like this would be ideal.


IIRC Google actually experimented with the idea of an even stricter mode in V8 at some point but dropped it when it didn't materialize the perf gains they were hoping for.

From a spec perspective, ES6 modules were a good milestone to "flip the switch" over to strict mode by default, but even with that being a fairly successful strategy (IMHO), it still left some nasty corner cases around the language (namely, there are now two distinct top level grammars, which led to the whole .mjs bikeshedding rabbit hole)


I've been thinking this for ages.

We need more special contextual comments to change a file or scope to behave better so we can move forward and scrape away all the bad legacy of JS.


> We need more special contextual comments to change a file or scope to behave better so we can move forward and scrape away all the bad legacy of JS.

I don’t think making the JS world into even more of a “set of subtly different languages that look mostly similar” is necessarily a solution so much as an extra problem.


How about deprecating that for a few years then? Doesn't seem good to keep the behavior, given that it will also be confusing in the future.

But perhaps we just don't know enough, and they will add `at`, and at some point actually bind `arr[index]` to the implementation of that function?


There's a large body of existing code out there, some of which might rely on the current behavior but never be updated. I doubt any length of deprecation period would solve this issue.

Instead, adding .at() allows having the new feature now, and in a way that's possible to polyfill for backwards compatibility.


The web as a platform is the most elaborate and epic showcase at not breaking backwards compatibility in the entire history of computing.

In the face of disasters like Python 2->3 and Apple M1 macs no longer being able to play Starcraft Remastered, the web is an aspirational beacon.

Let’s keep it going.


Isn't Windows even more epic?


Windows is very epic, but they did break compatibility a few times: the new driver model since Windows 7 (?) for example.

The Web probably has a longer history of not breaking things, but also has a smaller feature set to keep track of than Windows.

All in all, very hard to compare. But both undoubtedly epic.


The new driver model is Vista-era, I think.


For all the (often deserved) hate Windows gets, in particular the user space API’s, I still find the chaos incredibly exciting and an invitation to hack together all sorts of strange things in strange manners. I’m sometimes surprised by the levels of backwards compatibility and the “obsolete” technologies that still work fine.


You might well be right! Would be fun to see them go head-to-head over the title.


The success of Windows in maintaining backwards compatibility is probably some of the inspiration for the people working on JS. Windows has demonstrated that it's possible to maintain backwards compatibility for decades.

It's not easy, and it certainly leads to annoying platform quirks, but it is possible.


How would you deprecate it? There's tons of browsers and JS runtimes out there and they all adopt features at different speeds. Undoubtedly some browsers would never adopt the new feature, which means websites will simply break, and the web will become even more fragmented than it already is.


New features are added to JavaScript very carefully to avoid breaking existing code on the web. The downside is you often end up with multiple ways to do the same thing, but there are ways to mitigate this, like using a linter to enforce using a modern subset of the language.
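As a sketch of what that mitigation can look like, a minimal ESLint config (the rule selection here is just illustrative):

    // .eslintrc.js
    module.exports = {
      rules: {
        'no-var': 'error',       // let/const instead of var
        'prefer-const': 'error', // const where a binding is never reassigned
        eqeqeq: 'error',         // === instead of ==
      },
    };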


That's starting to sound an awful lot more like C++.


No, even then, they're fundamentally different.

An organization can go through and update its C++, because ultimately they're distributing binaries (or doing everything internally and not distributing anything at all).

Web "pages" aren't called that for no good reason. If in 2005 you bought a novel, or some punk writer–artist's printed pamphlet, and now you can't read it because in the meantime some engineers changed a spec somewhere, then that would be a failure, not just in the small, but on a societal level. Just rev the language is something that people who spend 40+ hours in an IDE or programmer's text editor think up when they're used to dealing in SDKs and perpetually changing interdependencies and fixing them and getting paid handsomely for it. But that's not what the Web is. The Web is the infrastructure for handling humanity's publishing needs indefinitely.

To rely upon another observation:

"[This] is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design. Most don’t think they need to design past the current release."

https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...


Your post sounds good... Until you realise that nearly any nontrivial web page from 10+ years ago is broken today...

No Flash... Iframes don't work properly anymore... HTTPS servers from 10 years ago are unsupported by todays browsers... Most of the IE hacks no longer work (remember progid:DXImageTransform?)... Any images/resources hosted elsewhere are likely now nonexistent...

Plenty of web features have been introduced and then dropped just a few years later. Backwards compatibility is great... But if it's practically broken anyway, I think there is a good argument for breaking it further. People who need to read an old page will probably need to use IE6 in a VM anyway.


The problem with this argument is that it demands we apply a false equivalence. The key word in your comment:

> hacks

Flash was not standardized. Same with IE's proprietary recommendations (and Mozilla's for that matter—XUL is proprietary, even though people often use "proprietary" as an antonym for "open source"). Most of the "web features" that people have in mind are in the same boat: experimental and draft-level proposals that eventually fall by the wayside for one reason or another. The Web is actually the single most successful attempt at a vendor-neutral, stable platform that exists. It's why we're having this conversation now.

The argument is that, because some people did something hacky or bleeding edge and then bled from it, then there's no real point in any amount of stability, so we should punish everyone. What a double whammy that would make for! First, you spend all your time taking care to do things correctly, so you pay the penalty inherent in that—what with moving more slowly than all those around you—and then someone decides, "ah, nevermind screw the whole thing", doubles back on the original offer and then breaks your shit? I can't say I'm able to abide by that. Imagine all your friends getting drivers licenses and receiving a bunch of speeding tickets for their recklessness, then one day you get pulled over and ticketed, too, regardless of the fact that you weren't speeding.


> nearly any nontrivial web page from 10+ years ago is broken today

Can you provide some examples? In my experience broken 10+ year old websites is the exception, not the rule. And most of the exception is because flash (which has a workaround; plus most popular flash websites have been ported).


It's not surprising given that JavaScript was designed and deployed world-wide by C++ developers.


> fixing

I dislike the abuse of this word. Lacking some shorthand notation doesn't make the language "broken".

It reminds me of handling support tickets where clients say "X needs to be urgently fixed" even though X has never been possible. (Should it? Often yes, but it doesn't mean it's broken.)


The problem with JS language development is the mountain of code that it would break if you do anything except append features.

    array[-1] = "foo"
This already works in JS but doesn’t do what you expect and just assigns the property.


To be clear this is equivalent to array["-1"] (unless you do horrible things overriding the built in types). Since arrays are "just" objects in JavaScript it is completely valid to add a property called "-1" to it.


Javascript array indexing is a MESS. This

    array["foo"] = "bar"
is valid javascript.


    array["at"] = "lunchtime";
Is also valid javascript... And will break functionality in this proposal!


I think this is the point of the stage 3 proposal.

Browser makers start implementing the feature and releasing it in the development and beta versions of their browsers. Then if the users of the experimental features start noticing that webpages break, the proposal will get an update.

If I remember correctly, this exact thing happened to `Array.prototype.flatten` which got renamed to `Array.prototype.flat` after it was realized that the former broke a lot of legacy webpages (and after a long discussion of `Array.prototype.smoosh`[1])

1: https://developers.google.com/web/updates/2018/03/smooshgate


In fact, it already happened to this proposal, which started life as `Array.prototype.item` before it turned out that some libraries were using the presence of a `.item` property to duck-type DOM collections:

https://github.com/tc39/proposal-relative-indexing-method#we...


Why does this make it a mess? Array already has a bunch of properties that aren't elements in the Array that come from the prototype, like join and slice. Your example is just adding a custom property to the array.


Because in most other languages, an array is an enumeration of values. That's it.

In Javascript, arrays are actually dictionaries. But not full dictionaries, rather just dictionaries that can have either a string or numeric key.

It's messy because an array in JavaScript isn't just "an array", it's this mishmash of features that are unexpected to a new-to-JavaScript developer.

Unexpected is the enemy of readable code.


No, arrays are just objects with a magic "length" property (technically: a special [[DefineOwnProperty]] internal method which sometimes also mutates "length"). Like objects, they only support strings as keys.

c.f. https://tc39.es/ecma262/#sec-array-exotic-objects
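A quick illustration of the string-keys-plus-magic-length behavior:

    const a = ['x', 'y'];
    Object.keys(a);  // ["0", "1"], the indexes really are string keys
    a['1'] === a[1]; // true
    a[5] = 'z';
    a.length;        // 6, assigning past the end updates "length"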


It’s only unexpected if you’re bringing in expectations from somewhere else. Coming from JS as basically my first language, it was surprising and a bit annoying that you couldn’t assign properties to hardly anything in other languages, even functions in languages where functions were supposedly first class.


Adding this wouldn't be backwards compatible, so it can't be done. Simple as that. It's not broken, either - it just doesn't do the thing you want.


It is consistent with slice, though. Neither changes the implementation of the array, they just provide ergonomic benefit to the developer.


If I understand this correctly it is to add negative index support to arrays. How will findIndex work when this is introduced? It currently uses -1 as a return value when no element satisfies the testing function.


It’s not adding negative indexes, a negative number will index from the end.

So [0, 1, 2, 3].at(-2) === 2


That would mean getting the last element of the array would be [0,1,2,3].at(-1).

I don't like that very much.

Mind you, I don't like the idea of using [0,1,2,3].at(-0) either...


Don’t think of it as counting from the end of the array — think of it as counting in reverse from the 0th element. .at(1) gets the next element (index 1) while .at(-1) loops around and gets the “previous” element (index 3, in this case).

Or, if it’s more intuitive, you can think of the array index as an unsigned integer where the max is equal to the length of the array. If you try to assign -1 to a uint8, the result will be 255 — the highest possible value (the last index).


Good thing we don't have to stagnate progress just because what we have will never be perfect.


Python also uses negative indexes in this way, see https://docs.python.org/3/library/stdtypes.html#common-seque...


Array.prototype.at would only ever return the value of something in the array at the index requested (or undefined). It'd accept a negative offset, but that's not the index of the thing you're looking for; it's an offset from one end of the array.


You can’t have a negative array index. Passing a negative number will just make this function count backwards instead of forwards.

[1, 2, 3, 4].at(0) === 1

[1, 2, 3, 4].at(1) === 2

[1, 2, 3, 4].at(-1) === 4

findIndex’s behavior is unchanged.


The MDN documentation for Array.prototype.at is easier to understand than the linked proposal: https://developer.mozilla.org/docs/Web/JavaScript/Reference/...


In the proposal text, what's the meaning of the '?' and '!' - for example in "Let O be ? ToObject(this value)"


Roughly speaking, '?' means that the operation might throw an exception, which should be propagated if thrown, and '!' means that the operation should never fail. (In Rust terms, '? ToObject(this value)" -> "ToObject(thisValue)?", and "! ToObject(this value)" -> "ToObject(thisValue).unwrap()".)

See: https://tc39.es/ecma262/multipage/notational-conventions.htm...


This will be handy for a multitude of string libs, for example string diffing (fast myers diff, ...)


Oh, actually I thought `'\u{1f4a9}'.at(0) === '\u{1f4a9}'`, but nope: after reading the docs, it's just like charAt except for negative indices.

Quite disappointing, so my parent comment doesn't stand either.
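To spell out the charAt-like behavior (assuming String.prototype.at per the proposal):

    '\u{1f4a9}'.length;     // 2, two UTF-16 code units
    '\u{1f4a9}'.charAt(0);  // '\ud83d', a lone surrogate
    '\u{1f4a9}'.at(0);      // '\ud83d' as well
    [...'\u{1f4a9}'].at(0); // '\u{1f4a9}', spreading iterates by code points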


Am I reading this correctly — it's an alias for arr[i] and str[i]? I know languages like Ruby have arr.at(i), but why?


In addition to negative index support, a function is more easily composable e.g. converting a list of indices to a list of objects: `indices.map(myObjectArr.at)`

I've lost count of how many times I've written this function manually.


> `indices.map(myObjectArr.at)`

That works in this case, but eta-reduction is not generally safe in Javascript due to the variadic nature of many functions, so I wouldn't recommend it in production code. For example `indices.forEach(console.log)` does not work as one would expect.
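i.e., each callback gets (value, index, array), so in Node you see:

    ['a', 'b'].forEach(console.log);
    // a 0 [ 'a', 'b' ]
    // b 1 [ 'a', 'b' ]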


Better a function that works in some cases than not having a function at all. Also, by using TypeScript definitions you pretty much always know what's coming in and going out and VS Code will yell at you if you're trying to do something stupid.


That won’t work in this case. TypeScript won’t yell at you if you pass a function that doesn't use all its arguments. If a second number argument is ever added to `at`, your code using it this way will break and TypeScript will not warn you.

This blog post explains why what you’re proposing is dangerous: https://jakearchibald.com/2021/function-callback-risks/#type...


I guess I'm then somehow doing subconscious mental gymnastics to prevent this, because me and several of my colleagues have coded like this for years and it hasn't been a problem even once.


TypeScript definitions don't block and VS Code does not warn for `indices.forEach(console.log)`

When using eta-reduction in Javascript both functions and all their (optional) arguments have to be known by the programmer and future programmers, instead of needing to know only the argument-slots being used by (x,y,...) => ...

It also defends / insulates against more parameters being added in the future.

Additionally, the way `this` in javascript works (or doesn't for a lot of callbacks) also pushes against using eta-reduction.


>TypeScript definitions don't block and VS Code does not warn for `indices.forEach(console.log)`

Sure, it's a valid way of logging all the arguments that pass through forEach. I don't see a problem here?

>When using eta-reduction in Javascript both functions and all their (optional) arguments have to be known by the programmer and future programmers, instead of needing to know only the argument-slots being used by (x,y,...) => ...

My VSCode setup shows all arguments of functions automatically. Also, I always avoid optional arguments in my code and writing functions that take in a variable amount of arguments or arguments of different types. I always refactor these out of my codebase.

>Additionally, the way `this` in javascript works (or doesn't for a lot of callbacks) also pushes against using eta-reduction.

`this` is another smell that I always avoid using in my codebase, and refactor code that uses it to work without `this`. I thought `this` being harmful is common knowledge?


A function that works in some cases will be a nightmare to debug. The pattern of `indices.map(myObjectArr.at)` is discouraged in JS because it often fails in unexpected ways. For example -

  ["1","2"].map(parseFloat) // works as expected
  ["1","2"].map(parseInt)   // nope


I learned ages ago that parseInt cannot be used like this, so it's not a problem for me, but I thought linters already take care of this case? Also, I wasn't talking about using parseInt or a similarly broken function, so I'm not sure what the quirks of old, widely known bad parts of JavaScript have to do with using functions as arguments, which is one of the most powerful features of the language.


There's nothing broken about parseInt. The problem is, it can take a second argument (the radix), and .map will feed it the current item's index as the second argument (and the whole array as the third, but that one gets ignored).

    const arr = ['1', '2', '3'];
    arr.map(parseInt);
    // is equivalent to:
    [parseInt('1', 0, arr), parseInt('2', 1, arr), parseInt('3', 2, arr)];


I know why it works this way. I'm not claiming it has a bugged implementation, I'm saying it's broken by design.


That will break with

> Uncaught TypeError: Cannot convert undefined or null to object

because `myObjectArr.at` doesn't bind `this` to `myObjectArr`. When the internals of `map` call the `at` function, `this` is just `undefined` instead of the original array and it'll throw.

Luckily Array.prototype.map takes a second argument just for this purpose

    indices.map(myObjectArr.at, myObjectArr)


Oh wow, I had to try it to make sure. I can't believe how broken the fundamentals of JS are.

There should really be a whole new standard library made from ground up without `this`, mutability and all the other legacy stuff.


The alternative being the pretty-much-just-as-short indices.map(i => myObjectArr[i])?


Composability is not about syntax but being able to express code as values. You can have complex behaviors from pure functions that are guaranteed to work and can be recomposed into more and more complex structures.

It's not a big thing of course with such a simple example, but just try to program without defining a single function and you'll see how tiresome it soon becomes to write everything out manually.


The proposal hosted on GitHub explains the reasoning behind it: https://github.com/tc39/proposal-relative-indexing-method


A quick read suggests that it adds negative indexing from the end of the array.


Imho that violates the principle of least astonishment - that "at" would have such different behavior from "[]" for negative numbers.


That's the whole point of this though... Currently you can't use negative indexes, but this would make them usable.


Oh, I'd assumed main point was to provide a composable function.


Accessing negative indexes currently returns undefined. This adds support for indexing from the end of an array with negative numbers.


This works, but not the way you'd think:

    const arr = [];
    arr[-1] = 'Hello world';
    console.log(arr[-1]);
    // Hello world
Instead of using the negative index, it stringifies the "-1" and uses it as the key of an object both when writing and reading. Thus, arr.length still remains 0


It's a function object then.


I'm a JS veteran, and not a single word in the article made any sense to me.


This is going to confuse C++ developers, for whom vector::at() explicitly throws an exception on out-of-bounds access.

But I guess the intersect between C++/JS developers is too small to care


As one of those C++/JS developers, I'd just say that I don't map concepts from one language to another, because it rarely works anyway.


I agree. I would expect it to throw an exception.

However I think there is a better way to support that - make `throw` an expression rather than a statement. Then you can do

    a[10] ?? throw new Error()
which currently doesn't work.
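The workaround today is to hide the throw statement inside an immediately-invoked function (the error message here is purely illustrative):

    const v = a[10] ?? (() => { throw new Error('out of bounds'); })();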



