Why iPhone Xs performance on JavaScript is so good (twitter.com/codinghorror)
276 points by Bootvis on Oct 8, 2018 | 124 comments



@steipete puts it pretty well: JavaScript really made it. We now tweak CPUs to make it faster.

https://twitter.com/steipete/status/1047415826083729408


That was a trick Dave Fotland used in the '80s to make his Go program, "Many Faces of Go", faster. He donated some key code from his evaluation function to SPEC and they used it as one of the parts in the SPEC integer CPU benchmarks. CPU makers tweaked CPUs to do well on those benchmarks, and hence on his code.


I've talked with him about those days... Though I haven't chatted with him about Go since AlphaGo flipped the whole world over.

I worked very hard on pulling the same trick at Sony. We'd made Vegas scriptable, and, once it was part of Bapco's Sysmark, we received immense support from chip vendors. Machines, engineers, tools, instructions added to instruction sets... It was a nutty time.

Of course, the biggest wins worked everywhere, like the 6755399441055744 single precision rounding trick, or the use of memory mapped file regions to cache the rendering tree. Still, becoming part of a performance benchmark is a great way to get attention.
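For the curious, here is a rough JS sketch of that magic-constant rounding idea, shown operating on doubles with little-endian byte order assumed; just an illustration, not the actual Vegas code:

    // Reinterpret a double's bits via a shared buffer (little-endian assumed).
    const buf = new ArrayBuffer(8);
    const f64 = new Float64Array(buf);
    const i32 = new Int32Array(buf);

    function fastRound(x) {
      // Adding 2^52 + 2^51 makes the FPU's round-to-nearest push the integer
      // part of x into the low mantissa bits of the sum.
      f64[0] = x + 6755399441055744.0;
      return i32[0]; // the low 32 bits now hold the rounded integer
    }

    fastRound(3.7);  // 4
    fastRound(-2.5); // -2 (round-to-nearest-even)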


I highly recommend this article in that same vein

http://benediktmeurer.de/2016/12/16/the-truth-about-traditio...

This is when Chrome on Android finally started to turn a corner, they've done a fantastic job. JS perf has increased 50% from 2016 to 2018 in Chrome/Android. Sadly Android is still hamstrung by extremely mediocre Qualcomm SoCs but they've made huge strides on the software side.


I was confused by the words "Go program" and "in the '80s" in the same sentence. OP is talking here about the Go board game played by a computer AI.

Not the Go language.


That's a wonderful bit of social engineering.


I had to read that comment a couple times before I realized you were referring to the board game and not the programming language :)


"The Birth and Death of Javascript" becomes more and more real

https://www.destroyallsoftware.com/talks/the-birth-and-death...


I still don't buy it. Javascript might buy a 4% performance improvement, but the increased resource usage makes that impractical for most scenarios. For server use, the increased wear makes it cheaper to buy more computers and have them last longer. For application use, performance is not relevant enough to make it interesting. So really the only possible application is video games.

EDIT: before people accuse me of making a false dichotomy: I acknowledge that there are other uses for computers, but am unable to think of any others where the increased resource consumption would be worth it. Another thing: cell phones, the battery would drain much more quickly if everything were a webapp. Perhaps video game consoles will switch to such a JIT, though...


With projects such as https://github.com/rianhunter/wasmjit it looks like this talk is basically coming true.

Not exactly Javascript but running untrusted code in a safe language in the kernel can give already performance improvements for some workloads due to avoiding system call overhead. It will be interesting to see where this goes in the future.


Oh, to be sure, the technology will be there (and may well make it into game consoles). I just struggle to see any other inlet for it.

Not to mention that that benchmark is very synthetic and doesn't really reflect the kinds of speedups that will generally be found.


Catching up with 1961.


I mean Bitcoin is pretty much built on “increased resource consumption is worth it”. But I get your point.


It’s not. It’s built on “here’s a way to convert energy into financial security and have a transaction platform on top of it”.


Any person in the situation where “financial security” based on government-issued currency is tenuous enough to make cryptocurrency an attractive option is either 1. doing something illegal, or 2. would be better off with access to the energy used.

“There are no financial instruments that will protect you from a world where we no longer trust each other.” https://www.mrmoneymustache.com/2018/01/02/why-bitcoin-is-st...


there are 3 assertions here and they are all dumb.

first two aren't even about cryptocurrencies - every currency is more attractive than bolivar in Venezuela these days. i guess all those people are doing something illegal (surviving?) or will be better off with electricity (spoiler alert - they already have electricity).

the last one is even dumber, and it comes from somebody who supposedly should know their way around finances, and yet quotes like the one you posted, or this one:

> These are preposterous numbers. The imaginary value of these valueless bits of computer data represents enough money to change the course of the entire human race, for example eliminating all poverty or replacing the entire world’s 800 gigawatts of coal power plants with solar generation. Why? WHY???

just show how clueless he is.

there is a gigantic difference between not trusting and not having to trust.

and of course market cap numbers are preposterous. it's because they are meaningless and don't represent anything that exists in reality. not the amount of money that was spent acquiring those currencies, not the amount of money that can be made selling those currencies. it's meaningless numbers.


> There is a gigantic difference between not trusting and not having to trust.

This is the point, though. Everyone expecting financial security has to trust. There is no alternative.


the alternative is math. when you sign a transaction and it gets included in the blockchain - i don't need to trust anyone that i got the money, i don't need to fear the transaction will be reversed due to some banking policies, i don't need to fear my account will be closed, etc.

so you're right, i have to trust math and i am ok with that.


You still have to trust your counterparties, though, and they will always be less trustworthy in aggregate than the functioning of a financial system.

You're worried about the wrong player in this game.


no, that's not how it works. counterparty risk exists regardless of which financial system you operate in. you may be paying insurance to reduce the damages, you may sue them in a court of law for breaking a contract - all of that is independent of the financial system.

cryptocurrency is simply a different financial system where you don't need to trust middlemen.

it's really astonishing how complacent people have become towards trusting middlemen in financial systems. if you ignore the banking services that you're paying for either directly or indirectly via taxes - what is the bank doing for you? why should you pay for having a record in a database? why should a bank be involved in facilitating or even censoring your transactions?

i'm worried about exactly the right player in this game.


“...[Bitcoin] also has some ideology built in – the assumption that giving national governments the ability to monitor flows of money in the financial system and use it as a form of law enforcement is wrong.”

Seems like the author I cited has it exactly right, then.


he has some things right, but not nearly enough.

is this the way you concede your opinion expressed above was wrong? because you just jumped from "financial system is not a risk, counterparty is a risk" to "but what about transparency and law enforcement"..


They are the same problem, and not a jump at all. You even used the phrase “court of law” yourself.

Financial security depends on the ability to show who bad actors are and undo their transactions. I don't see how this is possible without some sort of middleman involving civil government. Yes, the current banking system has flaws, but they are not technological flaws and cannot be solved by technological means.


> You even used the phrase “court of law” yourself

yes, in context of resolving dispute with counterparty

> Financial security depends on the ability to show who bad actors are and undo their transactions

no, that's in my opinion the opposite of financial security. i feel financially secure when i know no government, bank or corporation fuck up can affect the state of my account.

> Yes, the current banking system has flaws, but they are not technological flaws and cannot be solved by technological means.

the flaw of the current banking system is humans. humans are often corrupt, incompetent, unreliable and malicious.

basing financial security on the assumption that only honest, competent, reliable and well intentioned humans end up in positions of power is obviously wrong.

i don't hold that assumption and think that taking humans out of the loop is the best way to address flaws of existing system.

the fact that flaws are not technological in no way means technology can't solve them.


> the flaw of the current banking system is humans. humans are often corrupt, incompetent, unreliable and malicious.

Software, including a blockchain or Bitcoin, is designed by humans.


A clock is designed by humans. It doesn’t rely on a human to tell time. You can figure out the rest, I hope.


Something designed by humans can have design flaws.


yeah, i guess we should throw all those watches away.


I like the part about the Bay Area being a radioactive wasteland. :)


Technically, he only said it was an "Exclusion Zone", so it could refer to real estate prices in the 2035 Bay Area for non-quadrillionaires.


Fallout: New Francisco


We're firmly within NCR territory here.


The Hobologists send their regards.


ARM did something similar with Java before:

https://en.wikipedia.org/wiki/Jazelle

I would not be surprised if they come up with a way of directly executing WASM or such. (And then the possibilities for security exploits become a lot more interesting...)


The only thing they could do for Wasm would be to allow sub-process-granularity hardware memory access virtualisation and easily-enforceable float rounding modes afaik


Wasn't ARM's choice directly related to (or maybe influenced by) Android though?


No, it was influenced by J2ME


Jazelle predates Android.


In retrospect, it’s almost surprising that there was never a push for hardware acceleration of common JavaScript operations.


[flagged]


100% of users of mobile devices use JavaScript. 98% of them use it heavily in their web browsers. To ignore it, or try to pass it off as insignificant, is folly.


Oh, playing with words. Who uses that modulo 2^32 thing heavily in their browser? Folly is wasting the world’s resources on an ineffective, crappy implementation of unused features. Obviously it seems reasonable to fix it in hw. If I were Apple, I wouldn’t even think of reasoning with the js community.


ALU ops are cheap, interconnects are expensive. Curse JS for scatter-gather, not for its FCVTZS details.

(Though now I can't explain the magnitude of the win...fine, curse JS for both.)


Apple now tweaks cpus to make their js engine faster. This doesn’t sound as cool, but is more true.

I’m also not sure if that’s a “win” in general. Dynamic languages have used doubles since forever, so to me either A) cpu makers have been asleep for too long, or B) js is such a historical mess that fixing it in hw is reasonable now.

It is interesting how e.g. LuaJIT might benefit from that, but unfortunately it is [formally] unavailable on the App Store because of licensing restrictions. As are non-Apple fast js engines.


Any JS engine on an ARMv8.3 chip could benefit from this instruction, not just Apple's JS engine. Sure you can't put other JS engines on an iPhone (at least not without jailbreaking), but this says ARMv8.3 adds it, not "Apple's specific CPU", though I have no idea if there are any other ARMv8.3 chips at the moment.

As for other languages that use Doubles, this instruction isn't for general-purpose Doubles. CPUs already handle those. This instruction specifically implements the exact semantics that JavaScript wants. Now, maybe some other dynamic languages would want the same semantics when converting Doubles to ints, I don't know. But it's by no means a given that they would. Also, most languages don't use Doubles for all numeric types anyway.
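For reference, the semantics in question are roughly those of the spec's ToInt32, which is what `x | 0` already does in JS; a sketch of the spec algorithm, not how any engine actually implements it:

    // Sketch of ECMAScript ToInt32: NaN/±Infinity map to 0, truncate toward
    // zero, wrap modulo 2^32, then reinterpret as a signed 32-bit value.
    function toInt32(x) {
      if (!Number.isFinite(x)) return 0;
      let n = Math.trunc(x) % 2 ** 32;
      if (n >= 2 ** 31) n -= 2 ** 32;
      if (n < -(2 ** 31)) n += 2 ** 32;
      return n;
    }

    toInt32(4294967296.5) === (4294967296.5 | 0); // true, both 0
    toInt32(-2147483649) === (-2147483649 | 0);   // true, both 2147483647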


This instruction cannot result in a significant performance improvement for any js code that isn’t absolutely perf bound on just converting floats to integers. If your code is successfully making that your bottleneck your code has problems. None of the major benchmarks (I can’t even think of micro benchmarks that could really achieve this) are spending significant time doing double to integer conversions.

And as a nail in the coffin for this nonsense: javascriptcore does not use or even emit this instruction: https://mobile.twitter.com/saambarati/status/104920213252247...


> If your code is successfully making that your bottleneck your code has problems.

I disagree. My use cases involve loading, processing, rendering, and saving point clouds with millions to billions of points. For precision and file size reasons, coordinates are stored as 3 x int32 instead of 3 x float or 3 x double. Every time I load the points from disk I have to convert them to floats for rendering, or doubles for processing. Vice-versa for storing the processed results to disk.

Just because you don't need it, doesn't mean nobody needs it.


> For precision and file size reasons, coordinates are stored as 3 x int32 instead of 3 x float or 3 x double

I get storing them as float instead of double, but int instead of float for size/precision doesn't really make sense to me. A float is 32 bits, so it will be the same size as an int(32). If it's precision you're worried about, you're going to lose the precision as soon as you convert it from an int to a float to use.


Single precision floats lose precision as coordinates get larger, hence "floating" point precision. Point cloud data often consists of outdoor/aerial scans covering multiple kilometers, and floats cannot accurately represent large coordinate values like that. Translating to origin and/or rescaling only works to a limited extent and isn't a very robust solution.

Integers, on the other hand, can be used to represent coordinates in a fixed precision. E.g. if you want to store coordinates in millimeter precision, you multiply your double precision meter coordinates by 1000, and then cast the resulting coordinates to 32 bit integers.
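Something like this, as a sketch (millimeter resolution and a flat 3 x int32 layout assumed; not the actual file format):

    // Quantize double-precision meter coordinates to int32 millimeters and back.
    const MM_PER_M = 1000;

    function encode(metersXYZ) {               // Float64Array, length 3 * n
      const out = new Int32Array(metersXYZ.length);
      for (let i = 0; i < metersXYZ.length; i++) {
        out[i] = Math.round(metersXYZ[i] * MM_PER_M); // round to nearest mm
      }
      return out;
    }

    function decode(mmXYZ) {                   // Int32Array
      const out = new Float64Array(mmXYZ.length);
      for (let i = 0; i < mmXYZ.length; i++) {
        out[i] = mmXYZ[i] / MM_PER_M;          // back to double-precision meters
      }
      return out;
    }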

> If it's precision you're worried about, you're going to lose the precision as soon as you convert it from an int to a float to use.

Yes, that is why you convert ints to doubles for processing and only use floats for tasks where limited accuracy is okay, e.g. rendering.


> Single precision floats lose precision as coordinates get larger, hence "floating" point precision.

I've plenty of experience with floating point numbers.

> Translating to origin and/or rescaling only works to a limited extent and isn't a very robust solution.

Curious as to why you think this isn't the solution? Double precision just kicks the can farther down the line, whereas a fixed point offset origin gives you the "best" of both (albeit with slightly more code).

> Yes, that is why you convert ints to doubles for processing and only use floats for tasks where limited accuracy is okay, e.g. rendering.

Right, so you store as 32 bit values, and do the processing in double precision, but convert to single precision for rendering (from double precision)? So you were never going to store them as float on disk.


Presumably they don’t think that’s the problem because doubles already exceed the precision of the integer types they’re using.

I do get this decision - point cloud data for scanned geography tends to be uniformly separated over a large area, and a 32 bit float very rapidly starts to accrue significant error if the geometry is large. Presumably ye olde fixed point arithmetic would be more precise (for this specific purpose) than a 32bit float (which has thrown away significant amounts of precision for the exponent).

But again, 64bit float has 53bits of precision, which is larger than int32 that is in their data format. It’s also probably enough precision for more or less anything outside of extreme ends of science :)


You should consider using WebAssembly if you process so many numbers.


Javascript is surprisingly fast at processing that many numbers, using TypedArrays. You just had to avoid DataView up until now, but the V8 team recently fixed the performance of DataView, so it should have performance similar to TypedArray access, and therefore be around 10x faster, in future versions of Chrome.
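Rough illustration of the two access styles (both read an int32 out of a binary buffer; hypothetical layout, just to show the shape):

    const buffer = new ArrayBuffer(12);        // e.g. one 3 x int32 point

    // TypedArray view: fixed element type, platform endianness, the fast path.
    const ints = new Int32Array(buffer);
    const x1 = ints[0];

    // DataView: explicit offset and endianness, handy for mixed-layout records,
    // and only recently optimized in V8.
    const view = new DataView(buffer);
    const x2 = view.getInt32(0, /* littleEndian */ true);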


Although they are fast, TypedArrays and DataView only deal with setting/getting data from a buffer. They are totally unrelated to numerical calculations mentioned in OP.


Unless you care about the Z flag the instruction doesn't offer that much (perhaps longer pipeline due to reduced size)


Storing to memory is orders of magnitude slower than a floating point conversion.

This isn’t “conversion doesn’t need to be fast”, this is “conversion is so fast already that that won’t be your bottleneck”.

Even if it was slow, given what you are describing I would assume that your performance would be bottlenecked on IO even with a pure software floating point conversion.


Jeff Atwood/DHH’s conclusion regarding Speedometer 2 is not completely true according to Filip Pizło (https://webkit.org/blog/author/fpizlo/): https://twitter.com/filpizlo/status/1049132270773198848

In this case I’d trust Filip more :)


As I said in my reply to that, it could also be in combination with the much faster memory / caches on A12 -- see https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re...


https://developer.arm.com/docs/100069/latest/a64-floating-po... (FJCVTZS: Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero) is the instruction they're talking about. One main difference (vs FCVTZS) is that it sets the Z flag depending on whether the conversion was exact, and it's mod 2^32 on overflow.


I think this is the point where it starts being hard to argue that ARM is RISC anymore. x86 has BCD support and special facilities to handle NUL-terminated strings (i.e. C-strings) but now ARM has instructions with "javascript" in the name. At least Jazelle was a separate extension.

I mean I suppose it makes sense for them to do that, especially if the performance benefits are as huge as outlined in this tweet, but damn it feels dirty.

I assume the reason for this is that in JS, as far as I know, all numbers are stored as floats right? So you keep casting everywhere when you need an integer? I assumed that JS implementations were a bit more clever and kept them as ints whenever possible but maybe it's not as simple as I had imagined.


Not all numbers are stored and operated on as doubles, no. They are treated as if they were though. V8 and Spidermonkey (idk about the others) store and handle integers that fit in 32 bits as 32-bit integers.


That's not the right metric for determining whether it's RISC or CISC. See my earlier comment at https://news.ycombinator.com/item?id=18074292


This explains everything. BTW, does float=>int conversion happen a lot in JS, other than in the implicit 32-bit bitwise operations?


it really does not - if you’re running js that spends enough time doing double->integer conversion that that is a major bottleneck you might get some win. The instruction is not intrinsically faster than a regular float->integer conversion, the only real changes are to behavior around sentinel values. The biggest cost in floating point to integer conversion is the conversion itself, not handling of edge cases.

In reality the big win from this instruction is not performance, it’s code size.

So no, this instruction does not explain 40% perf improvement. It’s very easy to see why: for it to be a 40% (or any significant %) win your code would already have to be spending at least 40% of the time doing just double to int conversions. Not even micro benchmarks manage that level of insanity.


Ooh, follow-up demonstrating that this instruction is not making js faster: JSC does not currently emit that instruction. So given it’s never used, it seems unlikely to be the reason.


Let’s count how many js users knew about this “feature” before this thread (those who don’t understand what it “does” count too).

Can anyone point to where it is used?


This isn't something a JS developer uses. It's something the CPU uses when running JS someone wrote.


A less confusing way to put it might be this:

This isn't something a JS developer uses. It's something a JS VM/engine developer uses to make JS faster on that platform in general.


It’s an implicit part of every bitwise operation in JS.


Without clicking the link I knew this was going to be Jeff Atwood - something about the iPhone's JS performance seems to crack him up perpetually! :) Also see Dan Kaminsky's reply -

"I took a quick look at Speedometer 2.0 and it seemed to be driven by the speed of the browser cache implementation, as in how much was explicitly in memory, what poked the file system, how async was implemented. Not CPU bound."


That's pretty serious, right? Changes the entire conversation if this is not truly improving execution speed?


Yes - and even if it does, it doesn't matter for this particular benchmark, which isn't CPU bound.


JS perf has tightly tracked single threaded CPU perf for a long, long time -- as long as I can remember. That said, it is certainly possible memory speed is also a factor, see https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re...


2008: I never thought I'd be walking around with a full Unix machine in my pocket.

2018: I never thought I'd be walking around with a full Symbolics Lisp Machine in my pocket.


2028: Why does battery life still suck?


The Web still needs to catch up with some of Symbolics Lisp Machine development environment features though.

However, I like to think about it as carrying a Xerox Workstation or an IBM mainframe on my pocket, as both iOS and Android trace back to them.

Smalltalk, Lisp, Mesa/Cedar, bytecode environments with micro-coded CPUs, or JIT services on kernel with portable executables (IBM/Unisys).


there's a lisp phone now? where can i buy one?


So Apple has its own browser, with its own JIT compiler(1), which emits its own instruction set, which then runs on its own CPUs.

Heck, let’s propose the JPU (JavaScript Processing Unit) for server-side code.

1. https://webkit.org/blog/5852/introducing-the-b3-jit-compiler...


Imagine if Apple had its own operating system running its own static compiler!


With the end of Moore’s law, that may be the way forward, making cpus with more of the software implemented in hardware.


What I find most interesting about that link is this:

> The FTL JIT was designed as a marriage between a high-level optimizing compiler that inferred types and optimized away type checks

Why not just use types in your language? It's not that difficult and it yields a massive performance increase.


Because JavaScript does not have types?

What you’re essentially asking is: why are you supporting JavaScript?

Also, a lot of those optimizations occur in jits for statically typed languages, because many of the optimizations are functionally equivalent to “static” language ideas like devirtualisation and the like.


According to https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re... the reason is the new memory subsystem. The new chip is not just fast at Speedometer 2 but 40% faster at almost all other benchmarks. Also, all iPhone devices are fast with the iOS 12 JavaScriptCore.


Per https://images.anandtech.com/doci/13392/SPEC2006-eff_575px.p...

SPECint2006 -- 36.93 --> 44.92 is about 20% better

Speedometer2 -- 90 --> 125 is 38% better

That is nearly 2x what SPECint would predict. (It's also not changes in Mobile Safari / iOS 12 because every device benchmarked was on the same version of iOS 12).


It's not 40% faster at "almost all other benchmarks". In Geekbench 4 single core, it's about 15% faster, and in many other benchmarks about the same. That's why the official guidance from Apple is that the XS should be 15% faster. Having seen the other low level benchmarks, I was really surprised when I saw the Speedometer results..


This says Webkit doesn't emit these instructions yet: https://bugs.webkit.org/show_bug.cgi?id=184023


Kind of ironic given that safari continues to lag behind in the context of implementing web APIs. Heck, they still don't even support IntersectionObserver, which was introduced in like 2014. I really do share the sentiment of others in calling Safari the new IE; developing for safari on iOS is a total drag. I guess the performance team and specs teams must be compartmentalized :)


I think it's an inevitable path of processor evolution. When it's hard to increase performance, processors will include more useful bits of functionality for popular runtimes.


As if double to int/ptr was js-specific.


No, but this does it the way JavaScript wants it. FCVTZS has always been there. FJCVTZS[1] is new.

[1]: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc....

Did someone say ARM is RISC?


The traditional (eg mips) definition of risc was that the cpu should not hide how it operated - in essence, the compiler was required to schedule operations efficiently itself, for both performance and correctness. That’s why MIPS and others have things like branch delay slots. It turns out that that is a terrible idea - it means binaries are tied to a specific microarchitecture, let alone a separate implementation of the same ISA. Traditional risc also did not support floating point or integer division (both were done in software), but again software is necessarily slower than a hardware pipeline.

So risc has slowly become analogous to “more registers, orthogonal instruction set, not x86”


> Did someone say ARM is RISC?

Load/store are still always separate instructions, and instruction length is fixed — that's RISC enough. (But the distinction is not super meaningful these days anyway.)


Indeed, it could be said that ARM not being pure RISC is what keeps it competitive with x86. I wonder how long it'll be before x86 also gets a similar set of instructions...


The instruction is just specifying particular semantics, that doesn't really disqualify it from being RISC. The instruction doesn't do more than the baseline variant.


The javascript specific bit is that errors and out-of-range values are handled according to js semantics


The only thing different from the regular conversion operators is how bounds and sentinels are handled. In reality it is mostly just a code size improvement.


If there are any twitter devs here, clicking Greg Parker's tweet breaks the back button for me.


What browser? Working in latest stable Chrome on Windows.


Detailed steps: Firefox on windows.

1. Click to get to jeff's tweet from hacker news.

2. Click Gregs tweet.

3. Click back (works and we get to jeffs tweet)

4. Successive back clicks never get to Hacker News.


Can’t reproduce on Safari here.


Fixing javascript with hardware? It's not exactly genius but it seems worth it.

Only Apple had the #courage.


Apple just has a shorter distance between CPU engineering and software engineering departments, which allows them to coordinate more, and more quickly, and bring something like this to market faster. It's not courage, it's reading the writing on the wall and being able to cut through the red tape faster.


It's just a joke about baking the drawbacks of javascript into the hardware. It's a good change that's probably overdue at this point.


ARM did it, not Apple.


I think it's most accurate to say that Apple did it and ARM helped.

Apple designed the chip, and ARM designed the platform and instructions for them to build with.


Apple isn't using the instruction (see other comments). I don't know how you determine provenance for features, but basically ARM licenses include a space for private ISA extensions where custom instructions can live, though you still have to get permission for an instruction afaik (essentially ARM may choose to make it part of the public ISA, rather than it just being your own instruction).


Apple did it, not ARM.

Apple designs the chips:

https://en.wikipedia.org/wiki/Apple-designed_processors

Specifically,

    Apple has an ARM "architectural license", which is for
    companies "designing their own CPU cores using the ARM
    instruction sets. These cores must comply fully with the 
    ARM architecture. Companies that have designed cores that
    implement an ARM architecture include Apple, AppliedMicro,
    Broadcom, Cavium (now: Marvell), Nvidia, Qualcomm, and 
    Samsung Electronics.
https://en.wikipedia.org/wiki/ARM_architecture#Licensing


This is an instruction specified in the ARMv8.3‑A instruction set. It was specified and designed by ARM, not Apple. Apple is merely the first to ship silicon implementing v8.3-A.


Again, ARM did it, not Apple.

ARM designed that JS-optimized instruction. Apple just follows (as it has to) the ARM specification in its own way.

Don’t confuse between ISA design and its chip implementation design.


Don't think for a sec that ARM doesn't design ISAs without talking to vendors like Apple.


Thanks for the correction.


The ISA is all ARM; the microarchitecture is the vendor's, i.e. Apple's.


Hardware instructions created to handle JavaScript specifics makes me so very sad. What a terrible state of affairs.


Nah. Tweaking rounding modes or overflow flags costs very little. Observe there's nothing being proposed for NaN-boxing, hidden class check support, etc. If someone goes all-in on JS you'll know it.


Why?


Because, despite our papering over some of its warts, JS is still fundamentally a pretty terrible language; and rather than everyone trying to make it better, shouldn't we devote time and hardware efforts like this to either language-agnostic improvements, or to languages that are better designed?


Well, it's not terrible nor worse than other dynamically-typed programming languages. And it's one of the few languages with async-everything.

Let's not exaggerate nor underestimate the fool's errand of replacing a ubiquitous language.


> Well, it's not terrible nor worse than other dynamically-typed programming languages.

Not better or worse? It's got a type system with holes you could drive a truck through, and more arcane edge-cases than a fantasy book of spells.

> And it's one of the few languages with async-everything.

Just because something is async doesn't make it better or faster, at some point the work has to be done, and for those cases where async does make a difference, pretty much every other language has an async implementation that is either just as good or superior.

> Let's not exaggerate nor underestimate the fool's errand of replacing a ubiquitous language.

Yeah, I think that horse has bolted, but we can stop making it any more widespread than it absolutely needs to be.


I agree about the type system, but as far as async is concerned, javascript with its (now standardized) promises is miles ahead. The main advantage IMO is not to gain performance, but to allow concurrency without data races (as required by a GUI that needs to wait for a server). If you try async programming in C++ or Python you'll quickly discover that half of the libraries are pre-async and block, and you'll have to resort to threads and still worry about data races.
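A sketch of what that looks like in practice (made-up URL and element id, just to show the shape): the await suspends the handler without blocking the UI thread, and nothing else mutates your state while you wait.

    async function onRefreshClick() {
      const status = document.getElementById("status");
      status.textContent = "Loading...";
      try {
        // Non-blocking: the event loop keeps the UI responsive during the wait.
        const res = await fetch("/api/items");
        const items = await res.json();
        status.textContent = `Loaded ${items.length} items`;
      } catch (err) {
        status.textContent = `Failed: ${err.message}`;
      }
    }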


Python has easy-to-use shared-nothing message passing concurrency libraries based on threading, so no, you don’t have to worry about data races any more than you do with JS.


> a type system with holes you could drive a truck through

That's most dynamic languages. (Python is a bit more strong than others, sure, but whatever.)

> pretty much every other language has an async implementation that is either just as good or superior

Most other languages have async bolted on and a lot of existing libraries are synchronous. Pretty much all JS embeddings (both the browser and various server side things) exposed everything as asynchronous from the beginning using callbacks — and now callbacks easily get wrapped into promises and async/await works with them very well.
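E.g. a minimal sketch of wrapping a Node-style callback API into a promise (`legacyRead` is a hypothetical callback-taking function):

    // Turn fn(args..., callback(err, result)) into a promise-returning function.
    function promisify(fn) {
      return (...args) =>
        new Promise((resolve, reject) =>
          fn(...args, (err, result) => (err ? reject(err) : resolve(result)))
        );
    }

    // const read = promisify(legacyRead);
    // const data = await read("file.txt");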


I agree.

All the things we could be doing and we're making CPU improvements for the flaming pile of hot trash that is JS? Why?


Maybe because it's a considerable part of the code that CPUs are executing these days? Certainly UI stuff that should be responsive and fast.

Even many 'native' apps, mobile or desktop (electron) are written in javascript, and how much of your time do you spend in your browser or electron apps every day? Think vscode, atom, Spotify, Slack, facebook, reddit, twitter, ...


Floating point to integer conversion instructions already exist - this changes sentinel value and rounding conversion behavior only, which is a minor conditional change at the end of the existing conversion pipeline.

I would expect that the only actual change here is the final rounding and register store which is not significant. Those are already things that have to happen.


[flagged]


We've already asked you to stop being nasty in HN comments. Continuing to do that will get you banned, so please stop.

https://news.ycombinator.com/newsguidelines.html



