I worked very hard on pulling the same trick at Sony. We'd made Vegas scriptable, and, once it was part of Bapco's Sysmark, we received immense support from chip vendors. Machines, engineers, tools, instructions added to instruction sets... It was a nutty time.
Of course, the biggest wins worked everywhere, like the 6755399441055744 double-precision rounding trick, or the use of memory-mapped file regions to cache the rendering tree. Still, becoming part of a performance benchmark is a great way to get attention.
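If anyone's curious, the trick in JS terms looks roughly like this (a minimal sketch; the production code was presumably C/C++ reinterpreting raw doubles, and it assumes a little-endian host and values that fit in 32 bits):

```ts
// A minimal sketch of the 2^52 + 2^51 "magic number" rounding trick.
// Adding 6755399441055744 pushes the value's integer part into the low bits
// of the double's mantissa, so reinterpreting the bits yields a rounded
// int32 without an explicit float-to-int conversion.
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf);
const i32 = new Int32Array(buf);

function fastRound(x: number): number {
  f64[0] = x + 6755399441055744;   // 2^52 + 2^51
  return i32[0];                   // low 32 bits hold the result (little-endian)
}

console.log(fastRound(3.7));   // 4  (rounds to nearest, ties to even)
console.log(fastRound(-2.5));  // -2
```

(Note it rounds to nearest rather than truncating, which is what made it handy as a fast rounding path.)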
This is when Chrome on Android finally started to turn a corner; they've done a fantastic job. JS perf has increased 50% from 2016 to 2018 in Chrome on Android. Sadly, Android is still hamstrung by extremely mediocre Qualcomm SoCs, but they've made huge strides on the software side.
Not talking about the Go language.
EDIT: before people accuse me of making a false dichotomy: I acknowledge that there are other uses for computers, but am unable to think of any others where the increased resource consumption would be worth it. Another thing: on cell phones, the battery would drain much more quickly if everything were a webapp. Perhaps video game consoles will switch to such a JIT, though...
Not to mention that that benchmark is very synthetic and doesn't really reflect the kinds of speedups that will generally be found.
“There are no financial instruments that will protect you from a world where we no longer trust each other.” https://www.mrmoneymustache.com/2018/01/02/why-bitcoin-is-st...
the first two aren't even about cryptocurrencies - every currency is more attractive than the bolivar in Venezuela these days. i guess all those people are doing something illegal (surviving?) or will be better off with electricity (spoiler alert - they already have electricity).
the last one is even dumber, and it's coming from somebody who should supposedly know their way around finances, and yet quotes like the one you posted or this one:
> These are preposterous numbers. The imaginary value of these valueless bits of computer data represents enough money to change the course of the entire human race, for example eliminating all poverty or replacing the entire world’s 800 gigawatts of coal power plants with solar generation. Why? WHY???
just show how clueless he is.
there is a gigantic difference between not trusting and not having to trust.
and of course market cap numbers are preposterous. it's because they are meaningless and don't represent anything that exists in reality. not the amount of money that was spent acquiring those currencies, not the amount of money that can be made selling those currencies. they're meaningless numbers.
This is the point, though. Everyone expecting financial security has to trust. There is no alternative.
so you're right, i have to trust math and i am ok with that.
You're worried about the wrong player in this game.
cryptocurrencies are simply a different financial system where you don't need to trust middlemen.
it's really astonishing how complacent people have become towards trusting middlemen in financial systems. if you ignore the banking services that you're paying for either directly or indirectly via taxes - what is the bank doing for you? why should you pay for having a record in a database? why should a bank be involved in facilitating or even censoring your transactions?
i'm worried about exactly the right player in this game.
Seems like the author I cited has it exactly right, then.
is this your way of conceding that the opinion you expressed above was wrong? because you just jumped from "financial system is not a risk, counterparty is a risk" to "but what about transparency and law enforcement".
Financial security depends on the ability to show who bad actors are and undo their transactions. I don't see how this is possible without some sort of middleman involving civil government. Yes, the current banking system has flaws, but they are not technological flaws and cannot be solved by technological means.
yes, in the context of resolving a dispute with a counterparty
> Financial security depends on the ability to show who bad actors are and undo their transactions
no, that's in my opinion the opposite of financial security. i feel financially secure when i know that no government, bank, or corporation fuck-up can affect the state of my account.
> Yes, the current banking system has flaws, but they are not technological flaws and cannot be solved by technological means.
the flaw of the current banking system is humans. humans are often corrupt, incompetent, unreliable, or outright malicious.
basing financial security on the assumption that only honest, competent, reliable and well-intentioned humans end up in positions of power is obviously wrong.
i don't hold that assumption and think that taking humans out of the loop is the best way to address the flaws of the existing system.
the fact that flaws are not technological in no way means technology can't solve them.
Software, including a blockchain or Bitcoin, is designed by humans.
I would not be surprised if they come up with a way of directly executing WASM or such. (And then the possibilities for security exploits become a lot more interesting...)
(Though now I can't explain the magnitude of the win...fine, curse JS for both.)
I’m also not sure if that’s a “win” in general. Dynamic languages have used doubles since forever now, and to me it means either A) CPU makers have been asleep for too long, or B) JS is such a historical mess that fixing it in hardware is now reasonable.
It is interesting how e.g. LuaJIT might benefit from that, but unfortunately it is [formally] unavailable on the App Store because of licensing restrictions, as are non-Apple fast JS engines.
I disagree. My use cases involve loading, processing, rendering, and saving point clouds with millions to billions of points. For precision and file size reasons, coordinates are stored as 3 x int32 instead of 3 x float or 3 x double. Every time I load the points from disk I have to convert them to floats for rendering, or doubles for processing. Vice-versa for storing the processed results to disk.
Just because you don't need it, doesn't mean nobody needs it.
I get storing them as float instead of double, but int instead of float for size/precision doesn't really make sense to me. A float is 32 bits, so it will be the same size as an int32. If it's precision you're worried about, you're going to lose the precision as soon as you convert it from an int to a float to use.
Integers, on the other hand, can be used to represent coordinates with a fixed precision. E.g. if you want to store coordinates with millimeter precision, you multiply your double-precision meter coordinates by 1000 and then cast the resulting coordinate to a 32-bit integer.
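A tiny sketch of that scheme (helper names made up; in practice you'd also subtract a local origin first so values fit in int32 range, roughly ±2,147 km at millimeter resolution):

```ts
// Sketch of millimeter fixed-point encoding.
const MM_PER_METER = 1000;

// Quantize a double-precision meter coordinate to an int32 millimeter count.
function encodeMm(meters: number): number {
  return Math.round(meters * MM_PER_METER) | 0; // int32 covers about ±2,147 km
}

// Widen back to a double for processing (or to a float32 for rendering).
function decodeMeters(mm: number): number {
  return mm / MM_PER_METER;
}

console.log(encodeMm(1234.5678));   // 1234568
console.log(decodeMeters(1234568)); // 1234.568
```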
> If it's precision you're worried about, you're going to lose the precision as soon as you convert it from an int to a float to use.
Yes, that is why you convert ints to doubles for processing and only use floats for tasks where limited accuracy is okay, e.g. rendering.
I've plenty of experience with floating point numbers.
> Translating to origin and/or rescaling only works to a limited extent and isn't a very robust solution.
Curious as to why you think this isn't the solution? Double precision just kicks the can farther down the line, whereas a fixed-point offset origin gives you the "best" of both (albeit with slightly more code).
> Yes, that is why you convert ints to doubles for processing and only use floats for tasks where limited accuracy is okay, e.g. rendering.
Right, so you store as 32 bit values, and do the processing in double precision, but convert to single precision for rendering (from double precision)? So you were never going to store them as float on disk.
I do get this decision - point cloud data for scanned geography tends to be uniformly separated over a large area, and a 32-bit float very rapidly starts to accrue significant error if the geometry is large. Presumably ye olde fixed-point arithmetic would be more precise (for this specific purpose) than a 32-bit float (which has thrown away significant amounts of precision for the exponent).
But again, a 64-bit float has 53 bits of precision, which is more than the int32 in their data format. It’s also probably enough precision for more or less anything outside the extreme ends of science :)
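To put a number on the precision point (the coordinate here is made up): a millimeter-resolution value a few hundred kilometers from the origin is exact as an int32 and as a double, but float32's 24-bit significand can no longer represent it:

```ts
// A made-up coordinate ~400 km from the origin, in millimeters.
const mm = 400_000_123;             // exact as an int32, and within a double's 53-bit significand
const asFloat32 = Math.fround(mm);  // 400000128: float32 rounds away the last few millimeters
console.log(mm, asFloat32);
```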
This isn’t “conversion doesn’t need to be fast”, this is “conversion is so fast already that that won’t be your bottleneck”.
Even if it was slow, given what you are describing I would assume that your performance would be bottlenecked on IO even with a pure software floating-point conversion.
In this case I’d trust Filip more :)
I mean I suppose it makes sense for them to do that, especially if the performance benefits are as huge as outlined in this tweet, but damn it feels dirty.
I assume the reason for this is that in JS, as far as I know, all numbers are stored as doubles, right? So you keep casting everywhere when you need an integer? I assumed that JS implementations were a bit more clever and kept them as ints whenever possible, but maybe it's not as simple as I had imagined.
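From what I understand, engines do keep int32 representations internally where they can prove it's safe, but the spec treats every number as an IEEE-754 double, so double-to-int32 conversions still happen wherever int32 semantics are demanded, e.g. for the bitwise operators. A contrived illustration:

```ts
// Bitwise operators are specified on 32-bit integers, so the engine must run
// ToInt32 (a double -> int32 conversion with wrap-around) before operating,
// then box the result back up as a double.
const x = 2147483648.7;   // just a double, like every JS number
console.log(x | 0);       // -2147483648  (truncated, then wrapped modulo 2^32)
console.log(x >>> 1);     //  1073741824  (ToUint32, then shift)
```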
In reality the big win from this instruction is not performance, it’s code size.
So no, this instruction does not explain 40% perf improvement. It’s very easy to see why: for it to be a 40% (or any significant %) win your code would already have to be spending at least 40% of the time doing just double to int conversions. Not even micro benchmarks manage that level of insanity.
Can anyone point to where it is used?
This isn't something a JS developer uses. It's something a JS VM/engine developer uses to make JS faster on that platform in general.
"I took a quick look at Speedometer 2.0 and it seemed to be driven by the speed of the browser cache implementation, as in how much was explicitly in memory, what poked the file system, how async was implemented. Not CPU bound."
2018: I never thought I'd be walking around with a full Symbolics Lisp Machine in my pocket.
However, I like to think of it as carrying a Xerox Workstation or an IBM mainframe in my pocket, as both iOS and Android trace back to them.
Smalltalk, Lisp, Mesa/Cedar, bytecode environments with micro-coded CPUs, or JIT services on kernel with portable executables (IBM/Unisys).
> The FTL JIT was designed as a marriage between a high-level optimizing compiler that inferred types and optimized away type checks
Why not just use types in your language? It's not that difficult and gives a massive performance increase.
Also, a lot of those optimizations occur in JITs for statically typed languages, because many of them are functionally equivalent to “static” language ideas like devirtualization and the like.
SPECint2006 -- 36.93 --> 44.92 is about 20% better
Speedometer2 -- 90 --> 125 is 38% better
That is nearly 2x what SPECint would predict. (It's also not changes in Mobile Safari / iOS 12 because every device benchmarked was on the same version of iOS 12).
Did someone say ARM is RISC?
So RISC has slowly become shorthand for “more registers, orthogonal instruction set, not x86”.
Load/store are still always separate instructions, and instruction length is fixed — that's RISC enough. (But the distinction is not super meaningful these days anyway.)
1. Click to get to Jeff's tweet from Hacker News.
2. Click Greg's tweet.
3. Click back (works, and we get to Jeff's tweet).
4. Successive back clicks never get to Hacker News.
Only Apple had the #courage.
Apple designed the chip, and ARM designed the platform and instructions for them to build with.
Apple designs the chips:
Apple has an ARM "architectural license", which is for
companies "designing their own CPU cores using the ARM
instruction sets. These cores must comply fully with the
ARM architecture. Companies that have designed cores that
implement an ARM architecture include Apple, AppliedMicro,
Broadcom, Cavium (now: Marvell), Nvidia, Qualcomm, and
ARM designed that JS-optimized instruction. Apple just follows the ARM specification (and it has to), in Apple’s own way.
Don’t confuse ISA design with chip implementation design.
Let's not exaggerate nor underestimate the fool's errand of replacing a ubiquitous language.
Not better or worse? It's got a type system with holes you could drive a truck through, and more arcane edge cases than a fantasy book of spells.
> And it's one of the few languages with async-everything.
Just because something is async doesn't make it better or faster; at some point the work has to be done, and for those cases where async does make a difference, pretty much every other language has an async implementation that is either just as good or superior.
> Let's not exaggerate nor underestimate the fool's errand of replacing a ubiquitous language.
Yeah, I think that horse has bolted, but we can stop making it any more widespread than it absolutely needs to be.
That's most dynamic languages. (Python is a bit more strongly typed than others, sure, but whatever.)
> pretty much every other language has an async implementation that is either just as good or superior
Most other languages have async bolted on, and a lot of existing libraries are synchronous. Pretty much all JS embeddings (both the browser and various server-side things) exposed everything as asynchronous from the beginning using callbacks, and now callbacks easily get wrapped into promises and async/await works with them very well.
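For illustration, the usual wrapping pattern (readRecord here is a made-up stand-in for any Node-style callback API):

```ts
// A made-up Node-style callback API (stand-in for fs.readFile, a DB driver, etc.).
function readRecord(id: string, cb: (err: Error | null, data?: string) => void): void {
  setTimeout(() => cb(null, `record #${id}`), 10);
}

// Wrap it in a promise once; from then on async/await just works.
function readRecordAsync(id: string): Promise<string> {
  return new Promise((resolve, reject) => {
    readRecord(id, (err, data) => (err ? reject(err) : resolve(data as string)));
  });
}

(async () => {
  console.log(await readRecordAsync("42")); // "record #42"
})();
```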
All the things we could be doing and we're making CPU improvements for the flaming pile of hot trash that is JS? Why?
I would expect that the only actual change here is the final rounding and register store, which is not significant. Those are already things that have to happen.