Prepack helps make JavaScript code more efficient (prepack.io)
836 points by jimarcey on May 3, 2017 | 224 comments



I just ran this on a huge JS project that has a quite intensive "initialization" stage (modules being registered, importing each other, etc.), and prepack basically pre-computed 90% of that, saving some 3k LOC. I had to replace all references to "window" with a fake global object that only existed within the outer (function() {..})() though (and move some other early stuff with side effects to the end of the initialization), to get it to make any optimizations at all.

Very impressive overall.
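For readers wondering what the `window` workaround might look like: here is a minimal, hypothetical sketch of the pattern the parent describes (the module names and the `fakeGlobal` shape are made up), where all initialization touches only a local stand-in so a whole-program evaluator can see every mutation.

```javascript
// Hypothetical sketch of the "fake global" workaround. Instead of touching
// the real, mutable `window`, all init-stage code mutates a local object
// that exists only inside the IIFE, so an optimizer can fold it completely.
(function () {
  // Stand-in for `window`; never escapes until the very end.
  const fakeGlobal = { modules: {} };

  function register(name, factory) {
    fakeGlobal.modules[name] = factory(fakeGlobal.modules);
  }

  // "Initialization stage": module registration that a tool like Prepack
  // could pre-compute into a plain object literal.
  register("config", () => ({ debug: false }));
  register("greeter", (mods) => ({
    greet: (who) => (mods.config.debug ? "[dbg] " : "") + "hello " + who,
  }));

  // Side-effecting early work moved to the end, per the comment above.
  globalThis.app = fakeGlobal.modules;
})();

console.log(globalThis.app.greeter.greet("world")); // "hello world"
```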


Wow. That is pretty impressive. Although as far as the changes you needed to make are concerned, my understanding from a brief skim of the page was that `__assumeDataProperty` would be sufficient for supporting use of `window`.


It depends on what you use from window. If no functions on `window` are called during initialization you could just use `__assumeDataProperty` on all the properties of `window` you access.

If you call a function on `window` during initialization you would need to use `__residual` to emit the code into the residual program (assuming the call has no side effects on the heap that need to be seen by other initialization code). This is because otherwise Prepack cannot know how the abstract function would modify the heap (if it is an arbitrary function it could modify the global object for example), so it is not safe to execute.
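A rough sketch of what that advice might look like as Prepack input. The helper names come from the thread and the prepack.io docs, but the exact signatures may differ between versions; the stubs below are only there so the snippet also runs as ordinary JavaScript for illustration.

```javascript
// Stubs so this file runs outside Prepack (Prepack injects the real helpers;
// their actual signatures may differ -- treat this as a hedged sketch).
if (typeof globalThis.__abstract === "undefined") {
  globalThis.__abstract = (_type, code) => (0, eval)(code); // stub: just evaluate
}
if (typeof globalThis.__assumeDataProperty === "undefined") {
  globalThis.__assumeDataProperty = (obj, name, value) => { obj[name] = value; };
}

// Pretend a browser-like global exists with a numeric width (made-up value).
globalThis.window = { innerWidth: 1024 };

// Tell the optimizer: `window` has a data property `innerWidth` whose value
// is an abstract number. Reads stay abstract; no function on window is called.
__assumeDataProperty(window, "innerWidth", __abstract("number", "window.innerWidth"));

// Initialization code that only *reads* window properties can now be
// partially evaluated: everything except the abstract read folds away.
const columns = Math.max(1, Math.floor(window.innerWidth / 256));
globalThis.layout = { columns };
```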


How was the speedup? That's what I would use this for, not some small minification gains.


I'm gonna guess nothing. Static inlining isn't going to help a language that gets compiled again in browsers. In all likelihood an engine like V8 will do all the optimizations they're doing and more.

It's like optimizing handwriting for readability before typing it out. You're running a transform on the code that changes the form completely, so how optimized the original form is doesn't matter.


This sounds like very wrong reasoning. Yes, Chrome probably does some of this stuff, but that comes at a time cost that we could take at compile time, instead of at run time in the user's browser.


Someone on Twitter seems to have done an experiment on startup time for a trivial app, comparing ClojureScript compiler output with and without Prepack:

https://twitter.com/roman01la/status/859849179149021184


Why guess when you could test it?


Do you happen to know if this is similar to what Google's Closure Compiler for JavaScript would do?


It says at the bottom of the page that Prepack optimizes for performance/less computation while Closure optimizes for file size.


Closure Compiler does many of the same things, but I think Prepack is looking to do considerably more compile-time evaluation; Closure Compiler only does some of this.

I think prepack will also generally generate larger, bulkier code than you might get out of closure compiler. When I give prepack code from closure compiler, it seems to make the code much larger but change nothing. Your mileage may vary.


Seems to not support some implementations of require() - did you have any problems?


I hate to bring this up whenever I see a Facebook project, but it still warrants saying: the patents clause in this project, like in others including React, is too broad. I really wish they made a "version 3" that limited the scope of the revocation to patents pertaining to the software in question, e.g. Prepack or React, rather than a blanket statement that covers any patent assertion against Facebook. While I suppose the likelihood of this occurring is small, I can imagine a company that holds some valid patents unrelated to Prepack, such as something VR-related that Facebook infringes upon, and that also uses software Facebook produces like Prepack; it sues Facebook for infringement and then loses the right to use Prepack as a result. From my understanding these kinds of clauses are beneficial overall, but the specific one Facebook uses is too broad.

Tangentially related: what would happen if you did sue Facebook for patent infringement, and continued to use this software?


> sue Facebook for infringement, and then losing the right to use Prepack as a result.

You do not lose the right to use Prepack. You lose the right to use whatever patents Facebook may or may not have on the technologies underpinning Prepack, if any.

If you believe Facebook has a lot of strong patents relating to Prepack, the value of the patent grant is high, and the cost of losing it is high. If you do not believe Prepack is encumbered by Facebook patents, then the value of the grant is nil, and the cost of losing it is nil.

> Tangentially related: what would happen if you did sue Facebook for patent infringement, and continued to use this software?

In my view, almost certainly nothing, because I don't think they have any patents on the underlying tech. But if they did, then they'd be added into the ongoing patent fight, and would give Facebook marginally more leverage when negotiating the final settlement.

I rather suspect that the patents Facebook has on other non-Prepack things would be much more decisive.


IANAL, but most likely: nothing. Less likely: Facebook counter-sues you for patent infringement and (maybe?) has a slightly better case as long as you keep using it. Your lawyers tell your engineers to rewrite everything without Facebook technologies and it's all a big pain in the ass and you regret ever suing Facebook for patent infringement.


Would the same still apply to the output of the programs, e.g. could you keep the optimized code generated by Prepack?


That's for the court to decide, assuming Facebook has patents on Prepack. To minimize liability, probably best not to.


This gets brought up every time, even after it's been clarified many times (even by lawyers) that it gives you more rights overall (as the downsides are true either way).

Remember - without that patent grant you have no rights to any of Facebook's patents anyhow. With it, you do.

So the worst case is that you'd be in the same situation if you didn't have the grant.


They could use some proper, standard license, though. Then people wouldn't complain. Like Apache license.


The licenses you are thinking of are all related to copyright, not patents. Patents are a completely different beast.

I personally think that patents have no place in contemporary society, but that's just, like, my opinion, man.


The Apache license has had patent clauses for more than a decade.

http://en.swpat.org/wiki/Patent_clauses_in_software_licences...


> I personally think that patents have no place in contemporary society

Amend that to Software Patents and... I'm basically with you. Generally I think they are useful as a concept but I think they've become a bloated mess.


I know of at least two things that I think are pretty reasonable and (probably) patentable: Apollo Diamond's vapor deposition process, and those north/south printable magnets.

That said, I think I broadly agree with you about software patents; one reason why is that they're the glitter of intellectual property.

If you buy an Apollo diamond or one of those magnets, the use, possession, modification, etc of those items are not covered by the patents on the diamond (AFAIK, IANAL). With software patents, it seems to be the case that the "final product" IS covered by the patents.

This seems like it makes the presence of two separate categories pretty damn clear.


No, I explicitly mentioned Apache because it has patent clauses.


Yes, I know; I mentioned that my understanding is that they're more beneficial. The beneficial aspects of the clause don't forgive the broadness of it, however.


It's necessarily broad; otherwise it wouldn't cover/protect authors broadly.


No, that question is unsettled.

If you know better, you might want to give an answer here:

https://law.stackexchange.com/questions/14337/q-about-conseq...


False: an implicit patent grant means that by open sourcing a piece of software you imply that people can, you know, use it.


Some people think so. Others aren't so sure. US courts have not yet ruled on it.

However, even the people who DO think an implicit grant exists would mostly agree that the implicit grant is not sublicensable, which makes it a horrible mess and probably unusable.

An explicit grant is strongly preferable, IF people can agree on the terms. Facebook's terms are on the harsh side, but there's clear advantages to it existing.


> US courts have not yet ruled on it.

They sort of have. A patent has to be a major and "dominant" part of the implicitly licensed tech for it to be granted.

Basically, all existing case law says that you get some rights from implicit grants, but they're far weaker than explicit ones like FB's.


Not all of the case law says this; in fact, that's not even what that case was about, nor what that language said: http://en.swpat.org/wiki/Implicit_patent_licence#USA


Thanks for the info.


Careful. I don't know what his motivations are, but the info is misrepresenting both the context and the meaning of the language he references: http://en.swpat.org/wiki/Implicit_patent_licence#USA


If you license something under BSD and continue to license under the BSD, the question of sub-licensing seems moot.


And the weakness, legally speaking, of implicit patent grants was one major reason why GPL moved from v2 to v3.

Explicit is much much safer for users of software, legally speaking, because implicit grants have to be settled in court.

If you really want to go to court over it, and can afford it, then you may be right. Want to test that against Facebook? They reasoned (quite rightly) that it would be silly to suggest that.


What would their case be? "Yes, we gave away the software and documented it and paid designers to design it all pretty and spent marketing effort promoting it but nobody is allowed to use it because it's patented technology"?

Has any court ever ruled that a permissive license like BSD does not include a patent grant when it says "use in source and binary forms, with or without modification, are permitted"? Because that seems flat-out silly.


Patents don't mean "nobody is allowed to use it"; patents mean "you are required to license the patent from the patent-holder, which usually requires paying royalties."

I could totally see the point of a FOSS software project that implements a patented algorithm, where people work together to improve the thing but everyone who uses it still has to get a license from the patent-holder. (For a recently-top-of-mind example, Fraunhofer's MP3 decoding patent.)

Thus, it's not obvious that a FOSS license automatically implies patent release. Fraunhofer could have open-sourced some reference MP3 encoder themselves, without releasing the MP3 patents.


I think the situation you mention has a significant distinction: Fraunhofer (or whoever) would presumably advertise that they expect you to get a patent license to use the code in a product. Either they'd explicitly state that up front, or their lawyers would say it with the first politely worded letter to someone who starts using the "open source" code.

Software distributed under those terms would no longer adhere to OSI's definition of open source (I think it violates points 1, 3, and 6) nor the FSF's Free Software Definition (points 2 and 3).

It might not stand up in court, but I do think it's obviously unethical to release patent-encumbered "FOSS" without mentioning your patents and then demand users obtain a license after the fact.


> It might not stand up in court, but I do think it's obviously unethical to release patent-encumbered "FOSS" without mentioning your patents and then demand users obtain a license after the fact.

I would be amazed if something like this did not stand up in court. Read the BSD license — it's really relatively simple English.


You're asking the wrong question.

There is almost no case law that supports implicit patent grants and what little there is requires the patents be ones...

"that dominate the product or any uses of the product to which the parties might reasonably contemplate the product will be put" -- HP vs O-Type Stencil (1997)

So, again, very limited. Very dangerous. Very very open to being sued and crushed economically.

This is exactly why FB's patent grant provides you more rights than you get without it. There is no ambiguity, no fussing over what patents may or may not apply and be covered by the case law. It's explicit and better for you.

Long story short: in many ways it's less safe to rely on implicit grants.


But isn't it the case that any patents that would affect the use of react/prepack would "dominate the product or any uses of the product..."?

Yes, the grant is better than nothing, but also, the risk if the grant is removed seems minimal.


Any patents? Not at all.

It's of course possible that they have (or will have) some that would be covered by an implicit grant - though you'd need to go to court to confirm that.

But it's quite possible that they could have many patents that cover non-substantial parts. Guess what? You're now in violation of those if you haven't already licensed them.

So, again, even in the best case the implicit grant still requires you to go to court to maybe get clear of some liability.

Explicit ones have you covered from the start. More rights for you, broader coverage for you.


It seems very unlikely to find a way to use these products in an unintended way that would violate some tangentially related patent, but a) it's all hypothetical and b) one should never underestimate the ability and willingness of IP lawyers to allege an infringement.

"You have stored the software on electronic media and used that media as a doorstop, violating our access control patents by keeping the door open."

Hmm. Has that been filed yet?


I can read and understand the MIT or BSD licenses. That is an extremely valuable property of a contract to me. Both clearly state that I may modify and distribute the software.

Adding a huge pile of details about what exactly that means makes me rather nervous. I am not a lawyer, and I worry that somewhere in that huge pile of explicitness that there are consequences that I don't anticipate, and that don't match up with the broad simple language those explicit details replaced.


That "pile of explicitness" replaces a pile of implicitness wrapped up in case law and statutes that are relevant to any case that arises but you don't know about. Because what you read in a license is not what matters--how do you know you understand them as they will be interpreted by a court of law? What do you not know that you do not know?

(This is why companies have legal teams.)


This sounds like FUD. When has a simple license like BSD been interpreted to not grant rights to the patents implemented in the technology being licensed? The language there is really clear.


I don't know. Turn it around: when has it been interpreted to grant rights to patents implemented? Is there case law that establishes estoppel for such a thing? Has it been tested? Are your lawyers good enough?

These are questions that you should be asking because the name of the game is minimizing risk.


Still FUD. When someone distributes something under BSD and spends non-negligible resources promoting that software, they lose the ability to sue people for using it, because the language of the license says "you may use this". The risk is 0 and I would be surprised if this has even been argued in court because it's so trivially clear.


Don't be "surprised if", don't make blurfy engineer claims--cite your sources or withdraw your claims of safety that do nothing but encourage other people to undertake risk on behalf of your ideology.

The sooner the software developers of the world realize that there are reasons that we hire lawyers for legal tasks just like we hire programmers for programming tasks, the better. The profound arrogance you are choosing to exhibit regarding complex professions you don't fully understand is one of the absolute worst things in tech.


If I can't trust the justice system to interpret a simple contract plainly, then I don't have a chance of actually understanding the legal implications of a more complex contract.


I don't understand this perspective.

The justice system is like a computer. It has no common sense; it has no "Do What I Mean" button. You have to write things out in full explicitness, for it to do anything predictable/sensible at all.

And just like with explicitness in programming, explicitness in a contract doesn't translate to "more things that could go wrong"; it's instead fewer degrees of freedom. More things pinned down; fewer left to interpretation.


I expect that unexpected results may come from unexpected scenarios, but not in relation to basic questions that appear to already be addressed in the contract, like "can I use this software?" If I can't trust that a statement means what it says, no amount of elaboration can clarify the contract.


I get what you're saying, but calling it "like a computer" is a little misleading and feeds that weird habit amongst technical people of assuming that The Letter Of The Law is all that matters, when really it's the boatload of precedent, moral justification, and historical context that informs that law.

It ain't hard fact until a judge says it is, and even then it can be moved with a big enough lever. The fact that specificity reduces the axes of freedom along which what you think is obvious can turn out not to be is why I upvoted you; that's the super important part everybody ignores.


Right, you can't actually "rules lawyer" in real law; there is a spirit of the law that informs connotation of the text where edge-cases are concerned.

A good lawyer tries to phrase their contracts et al to expose the connotation of the relevant laws in the text, so that you don't need to read the laws, only the text. Sort of like explicitly including default parameter assignments to a function.


Yeah, totally--I knew you knew, just didn't want the implication to leave room to well-actually.


You seem convinced that your interpretation of a "simple" contract is the only obvious one and that any other interpretation is unnecessarily convoluted. But I don't think that is the case at all. Simplicity almost always comes at the expense of clarity, which is why simple documents like the MIT license or the US constitution produce so many conflicting interpretations. Not because there are bad actors; just because it is unclear.

If you ask a lawyer what is best, they will likely recommend a professionally drafted license like the Apache License v2. Just as you may not understand every aspect of a doctor's diagnosis or prescription, it may not be feasible to understand all the relevant statutes and case law. But I think it makes more sense to defer to expert advice than to bury one's head in the sand.


You're assuming that because a contract is simple you're more likely to understand it correctly. That is horribly misguided.

In all likelihood you don't fully understand contracts regardless of length because you're unaware of the entire legal environment they exist in. The reason contracts often have weird stilted phrases is that they have specific meanings established over decades if not centuries of legal battles over those semantics.

A good example for doing this right are the Creative Commons licenses: they come with a summary in plain English for humans and a full license text for the legal system.


AIUI this issue still hasn't been settled to everybody's satisfaction.


I can't edit my post since it's been too long, but I do want to clarify that when I say "valid patent", I mean a patent that has been granted by the patent office and isn't held by a non-practicing entity.

I could easily see a company 1) using something Facebook has created, like React, for a VR based UI; 2) patenting something related to their VR technology; 3) Oculus making the same kind of technology and not paying the company royalties/licensing its usage; and, 4) suing Oculus for patent infringement.

Of course it is within Facebook's right to make the patent clause as broad as it is, but I don't feel like it is fair that the company above would not be able to use React because Facebook infringed on a patent unrelated to React. It would be a lot nicer if Facebook either used an already existing license like Apache 2, or updated the patent clause to be more specific.


Let me get this straight.

- Facebook says, "hey you can use this software, no copyright strings attached."

- Facebook says, "also, any patents we have to that software, here's a license. One stipulation, if you sue us for patent infringement, we revoke that license."

- Your hypothetical company, let's call it Acme, seeing the value of getting to use great software, for free, with no copyright or patent royalties, takes Facebook up on their charitable offer.

- Acme then sues Facebook for patent infringement.

- As per the license, Facebook revokes their free patent license they gave to Acme.

And somehow the victim in this story is Acme? That's pretty rich.

Here's an idea: if part of your strategy as a company is using your patents to sue people, maybe don't expect those other companies to give you their patents for free?


Are you arguing that if you have a patent you shouldn't be able to get royalties on it, or arguing that patents aren't even necessary? I don't understand your argument. Facebook can do whatever they want with their patents, including license any patents related to the specific software, just like they could sue other companies for patent infringement.

You assume that I am making Acme out to be a victim, which isn't true. I'm arguing that Facebook should give a patent license that's more specific to the software they are licensing, e.g. React, Prepack, rather than a blanket license for any patents they ever get. For example, read the Apache 2 license, which is my preference for licenses that need a patent grant, as it's much more specific. You also assume that the strategy of getting patents is to use them to sue, which isn't true. You can get patents without suing, and the likely only reason you'd sue is if a company is willfully infringing upon a patent (e.g. won't license the patent).


> I don't feel like it is fair that the company above would not be able to use React because Facebook infringed on a patent unrelated to React.

Again, keep in mind that in your example, the company still has a license to use React. What you're losing is your explicit grant of a license to the patents Facebook MAY have on the technology. But nobody has ever identified such a patent, and one of the core React devs is on record as saying he isn't aware of any either.


Previously, when absurd patents first came to our attention (see edit) -- the history is of course very long -- we placed all our public-facing work in an arm's-length company, down to the company website.

I never imagined, back in the late 90s, that a brochureware dot-com might require legal liability boxing, and I never imagined whole libraries like React being adopted so wholesale, so seemingly blindly (if not blindly, why has nobody posted the outcome of their due diligence? A blog topic I'd like to see get attention), and so trivially that, yes, our brochureware site could get used against us. Our early clients expressed concern first: we had to pass their IP hygiene checks, and we adopted those right away. Such stringencies are why many entities I've worked with have no or empty websites, but old, old registration dates on their dot-coms...

* edit: a missing dependent clause was, I guess, strongly implied, but it's about when we first saw frivolous, vexatious patent suits aimed at the front doors of random web presences, on the theory there might be money and weak legal defenses there... I mistakenly imagined that to be spurious, not that it would develop into a pseudo-legitimate "industry".

A moral thought: maybe if we sold fewer reinvented wheels, there might be less temptation for parasitic behaviour to organize itself, as patent trolling has done? I can certainly plead the fifth on wheel reinvention most days I write code... (edit last) My meaning is that I frequently find highly objectionable behaviour being justified by calling out perceived comparable poor behaviour. Now, I don't go so far as to condemn, e.g., the JS library crowd, or anyone else for that matter; the field is too young to blame yet. But with the first generation who grew up exposed to computing comparable to modernity now maturing, it's natural the industry will mature also. I see the brake on maturation more as a flood of great new tools than as any moral or human lacking. But self-appointed grown-ups trying to bully-tax us might be the natural parasitic complement to this rich novel ecosystem. We don't need to appoint random adults to combat this; we just need to question and talk about what is sane to accept. There's too much overreaching paternalism propping up big-business assumptions right now.


This is cool–it's worth mentioning that you might be trading runtime performance for bundle size though, here's a contrived example to demonstrate: http://i.imgur.com/38CR3Ws.jpg


It's in beta stage, so that's not unexpected. They can easily add a cost function later that considers multiple parameters.


Keep in mind that everything is gzipped nowadays, so it may not make a big difference in network usage, although it is still likely to cause some memory overhead.


No — I have written and published an article about how gzip works with JS, and the result will compress pretty well. Not as well as the original code, but I would guess within the same order of magnitude.


JavaScript parsing is still a huge bottleneck. 1 MB of it will still take a whole second to parse in V8 (note: just to parse it, not actually evaluate or run it!).


I think this is (so far) for snippets. I agree otherwise though


This has promise but still needs more work. I added one line to their 9 line demo ( https://prepack.io/repl.html ) and it ballooned to 1400+ lines of junk:

    (function() {
      function fib(x) {
        y = Date.now(); // the useless line I added
        return x <= 1 ? x : fib(x - 1) + fib(x - 2);
      }
    
      let x = Date.now();
      if (x * 2 > 42) x = fib(10);
      global.result = x;
    })();
I understand Date might not be acceptable for inner loops but a lot of my code that deals with scheduling would benefit significantly if I could precompute some of the core values/arrays using a tool like prepack.


It's not a useless line, because prepack has no idea what Date.now() does (there are no guarantees in javascript that it hasn't been replaced with another function). It might mutate some global state somewhere, so the resulting code needs to call Date.now() as often as it would've if fib(10) was called. Basically the output is the unrolled version of the recursion, which cuts down on function invocation (comparatively expensive in dynamic languages).

If you replaced the line with say: `var y = 4;` you'll notice that it has been optimized out.
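The point that `Date.now` is replaceable is easy to demonstrate in plain JavaScript; this contrived patch shows why each call is observable and cannot be folded away:

```javascript
// Date.now is just a writable property on a mutable global object, so an
// optimizer cannot assume it is pure: any earlier code could have replaced
// it, as this contrived patch demonstrates.
const realNow = Date.now;
Date.now = () => { Date.now.calls++; return 0; };
Date.now.calls = 0;

function fib(x) {
  Date.now(); // observable call: must happen once per invocation
  return x <= 1 ? x : fib(x - 1) + fib(x - 2);
}

const result = fib(10);
const callCount = Date.now.calls;
Date.now = realNow; // restore the real clock

console.log(result, callCount); // 55 177
```

The 177 calls are exactly why the unrolled output has to keep every `Date.now()` invocation in place.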


> Basically the output is the unrolled version of the recursion, which cuts down on function invocation (comparatively expensive in dynamic languages).

You're right about why it's included, but this is a bug, not a feature. If this is a hot function it'll be optimized. The recursive call is a known, non-dynamic invocation so the function location will be inlined leaving just the fairly low function call overhead itself. There's a reason nobody does arbitrary-length loop unrolls.

I would definitely bet that this is a bug to be fixed :) Right now you can see this if you call fib(20) in that example...the compiler times out while trying to unroll that far. Clearly that behavior won't stick around.


But should it still unroll the recursive calls and balloon the code out just because a mutable function call was made, or should it leave it as an actual recursive function call?


"prepack has no idea what Date.now() does" isn't quite true, because there's a typeof check happening on the return to ensure it's a number, meaning that some assumptions are definitely being made.


Also the lack of variable declaration in `y = Date.now()` made y an implicit global, so it couldn't be optimised out no matter what. Even doing `let y = Date.now()` cuts out 400 lines.
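The implicit-global behavior is easy to see in isolation; strict mode turns the silent global creation into an error:

```javascript
// In sloppy mode, `y = ...` with no declaration silently creates a global,
// which is why the optimizer must assume the assignment escapes. In strict
// mode the same assignment throws, making the escape visible.
function leaky() {
  "use strict"; // function-level strict mode: undeclared assignment throws
  y = Date.now(); // no var/let/const
}

let threw = false;
try {
  leaky();
} catch (e) {
  threw = e instanceof ReferenceError;
}
console.log(threw); // true
```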


Even better: it doesn't seem to ever give up on unrolling loops. This creates 60k lines:

    (function() {
      for(var i = 0; i < 10000; ++i){
        Date.now();
      }
    })();


Doesn't this make Prepack completely unusable? Surely it should try to apply its optimizations to each block of code, and for each block where the "optimized" version results in more lines of code - or perhaps better, more operations - the optimizations are thrown out, keeping that block of code untouched?

It makes no sense to use an optimization tool that does the exact opposite in cases it cannot handle. In no world is 1400 lines better than 10.


"Prepack is still in an early development stage and not ready for production use just yet. Please try it out, give feedback, and help fix bugs."


Well, if you have 1400 lines of 100 prime numbers each, plus a last line to retrieve one of them, and I ask for the 140,000th prime, that solution would be better (in terms of speed) than 10 lines computing any prime number on demand.

If you know the possible range of accepted inputs, 1400 LOC can be better than 10.

One normally thinks that many more LOC is worse than a one-line solution, but, like most answers, it depends.

And that's a simple case. There are lots of similar cases one doesn't realize.
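The prime-table tradeoff described above can be sketched directly (a tiny 10-entry table stands in for the hypothetical 1400-line one):

```javascript
// Computing on demand: small code, pays the cost at every call.
function nthPrimeComputed(n) {
  const primes = [];
  for (let k = 2; primes.length < n; k++) {
    if (primes.every(p => k % p !== 0)) primes.push(k);
  }
  return primes[n - 1];
}

// What a "prepacked" version might look like: the table baked into the
// source. O(1) lookup, at the cost of code/bundle size.
const PRIME_TABLE = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]; // first 10 only
const nthPrimePacked = n => PRIME_TABLE[n - 1];

console.log(nthPrimeComputed(10), nthPrimePacked(10)); // 29 29
```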


Hi, I am Nikolai Tillmann, a developer on the Prepack project. I am happy to answer any questions!


Awesome work! Are there plans to incorporate Flow definitions as an alternative to the __assumeDataProperty helpers?

Edit: Looks like this is targeting compiler output, so Flow types would typically be gone by that stage. Integration with Flow could come via a babel plugin that emits `__assumeDataProperty()` when it encounters a convertible Flow type.


(can't edit my post any more)

I made a plugin to do this here - https://github.com/codemix/flow-runtime/tree/master/packages...


What will happen if you have a bug in your function? Take the Fibonacci function for example: what if you have a bug that creates an infinite loop? Will Prepack terminate?


I was thinking something similar: how does Prepack determine that a function can't be optimized or hits an infinite loop? Seems like the good ol Halting Problem[0].

[0]: https://en.wikipedia.org/wiki/Halting_problem


The Halting Problem is a bit like Information Theory.

Information Theory, and the Pigeon Hole Principle in specific, says there is no algorithm that can compress all data. That doesn't mean that compression is a fruitless endeavor and we should never have written a compression library.

It means that you have to figure out if your input falls into a subset of data where the outcome of the effort is both tractable and useful. You compress text and structured data, you don't compress noisy data or already compressed data, because it's usually worse than doing nothing.

Similar thing with code analysis. If there is back branching or asynchronous code, you are probably going to hit the Halting Problem, so don't even try. But if the code is linear, then precomputing the output is tractable and useful.

You could also simply apply a budget. If an attempt to unroll a block of code exceeds a certain number of clock cycles or branch operations, you should give up and look at the next most likely scenario. The analysis you're doing might reliably halt in an hour, but who wants to wait that long? Especially when it's one of many such evaluations you'll have to do per build or per day? Just give up and keep moving.
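The budget idea can be sketched in a few lines; the names here are illustrative, not Prepack's internals:

```javascript
// Minimal sketch of budgeted evaluation: try to concretely evaluate a
// function, but give up after a fixed number of steps instead of risking
// non-termination. The evaluated function cooperates by calling tick().
function tryPrecompute(fn, arg, maxSteps = 10000) {
  let steps = 0;
  const tick = () => {
    if (++steps > maxSteps) throw new Error("budget exceeded");
  };
  try {
    return { ok: true, value: fn(arg, tick) };
  } catch (e) {
    return { ok: false, reason: e.message }; // leave the code as-is in output
  }
}

const sumTo = (n, tick) => {
  let s = 0;
  for (let i = 1; i <= n; i++) { tick(); s += i; }
  return s;
};

console.log(tryPrecompute(sumTo, 100));    // { ok: true, value: 5050 }
console.log(tryPrecompute(sumTo, 1e9).ok); // false
```

A real partial evaluator would count interpreter steps itself rather than instrumenting the code, but the bail-out shape is the same.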


GCC doesn't crash if you tell it to unroll loops with an infinite while loop in your code, does it?


But you can get into infinite loops with templates!


This is fucking impressive!

Just evaluated this:

   (function() {
     var a = 'a';
     var b = [a, 'b', 'c']
       .filter((i, index) => i.charAt(1 - index) !== 'b')
       .map(i => i + '!')
       .join(',');
     console.log(b);
   })();
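
For what it's worth, everything in that IIFE is constant, so the residual program should boil down to roughly:

```javascript
// 'b'.charAt(0) === 'b' filters out the middle element, while
// charAt(1) and charAt(-1) both return '' for the other two, so:
console.log("a!,c!");
```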

BTW, how do you avoid going into infinite loops?


Another question: Does this use Flow types to optimize things like object member properties?


That is not surprising; your filter/map/join only uses constants, so it probably just does an eval().


Thoughts on supporting source maps or something like it? It's very common to use source maps on production code when debugging live site issues and it's nice to have that translation capacity.


Webpack handles sourcemaps, and one of us, gajus, has already integrated it. Prepack should be just another stage in the build pipeline.

Having said that, there appears to be a sourceMaps flag in the Prepack source: https://github.com/facebook/prepack/blob/57cc59c07d164e4827c...

I will be attempting to use this as a simple black box, relying on Webpack for features that are only tangentially related to this new optimization stage in my build stack.


Can it be plugged into webpack for production builds? (I know it's not production ready yet)


Yes, as of about 25 mins ago - https://github.com/gajus/prepack-webpack-plugin
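
Usage looks roughly like this, going by the README (the options object is empty here; the API may still change while the plugin is this new):

```javascript
// webpack.config.js (sketch based on the plugin's README;
// exact options may differ as the plugin evolves)
import PrepackWebpackPlugin from 'prepack-webpack-plugin';

const configuration = {};

export default {
  // ...the rest of your webpack config
  plugins: [
    new PrepackWebpackPlugin(configuration)
  ]
};
```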


Hah, nice. Love it.


Are you seeing a noticeable improvement in peak performance using prepack or does it mostly help improve initial performance and warm up?


Currently, it's really about the initialization phase of the code.


How do you validate your program transformations? In school I wrote a Haskell-to-Clojure translator that also does AST transforms (some of which unpack syntactic sugar). However, my only way to validate the transforms was that the input program passed the "same" unit tests as the output program. Do you have any insight into this?
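
One extra check beyond fixed unit tests is differential testing: run the input and output programs side by side on lots of random inputs and compare observable behaviour. A sketch of the harness shape (the "transformed" function here just stands in for a real optimizer's output):

```javascript
// Original source and its (supposed) optimized rewrite.
function original(s) { return s.split("").reverse().join(""); }
function transformed(s) {           // stand-in for an optimizer's output
  let out = "";
  for (let i = s.length - 1; i >= 0; i--) out += s[i];
  return out;
}

// Random lowercase ASCII strings as test inputs.
function randomString() {
  const n = Math.floor(Math.random() * 20);
  let s = "";
  for (let i = 0; i < n; i++) {
    s += String.fromCharCode(97 + Math.floor(Math.random() * 26));
  }
  return s;
}

// Compare behaviour on many inputs; any divergence is a transform bug.
for (let i = 0; i < 1000; i++) {
  const s = randomString();
  if (original(s) !== transformed(s)) {
    throw new Error(`divergence on "${s}"`);
  }
}
```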


Hi! Judging from the comments only this looks to be a great tool. Good job!

I'm a developer currently learning more backend and JS. I hope this question doesn't come out as arrogant. I heard once that "a good compiler should be able to compile itself". Does prepack prepack itself to run faster or can it just for the fun of it? :-)


Not quite. This PR is almost ready to do it.

https://github.com/facebook/prepack/pull/397

Currently it is blocked on Map/Set support in the output which is an outstanding issue. We could also try it with a Map/Set polyfill.

We're very close to being able to though!

In fact, this isn't just Prepack itself but it is Prepacking the JS part of the entire Node.js runtime as well!


Can you provide any indications of the size of performance improvements?


To really see the benefits, you need to use it in an environment where the parsing overhead gets cached. In terms of pure execution time, we see improvements of up to 10x for the initialization phase.


Thanks for posting here Nikolai.

In terms of the development status, does the current codebase generally work with React?

I'm happy to throw this at my codebase and report bugs, but whether React itself (which is ~95% of my bundle size) is expected to work (unlike some other optimising products) would say a lot about what the expectation might be.


It's a brilliant tool, bravo! Why did you go down that road instead of heap snapshot? Portability?


Conceptually, Prepack would go together well with heap snapshots, as Prepack figures out what is the concrete heap and what are residual computations. If we tweak Prepack to completely separate these two aspects in the generated code, then you'd take the heap snapshot after the concrete part has been built, and still run the fixup code at runtime.


It doesn't seem like this tool supports global level objects.

For instance, a common pattern being:

(function(w) { console.log(w.location); })(window)

This results in issues because it's unaware that window would contain other methods. Is it possible to exclude certain objects in this case?


Do you have any papers you recommend to learn more about symbolic execution? Can you describe a high level overview of the process while citing the relevant source files in Prepack? Cool stuff!


Do you have any performance benchmarks?


What kind of symbolic values does Prepack have, and what kinds of constraints on them does it understand?


I used Pex a lot, very cool to see your next project is _also_ PL Magic :)


Can you prove that any of the optimizations in your docs aren't already done by V8? I agree with the other commenters in this thread-- V8 likely does these already. You have an extraordinary claim, which requires extraordinary proof.


The whole point is to not let V8 do any job at all.

V8 == user impacted.

Compile time V8 == compiler impacted, users happy.

It's like interpreting what you can during compile time, with caching capabilities.

To speed up boot time and init time, not runtime per se.


This. By transforming and evaluating things that really could be constants beforehand, V8 has less to do when it goes to run the javascript.


V8, like all tiered JITs, doesn't do many optimizations unless the code to be optimized is hot. This is the right thing for V8 to do, because most code on the Web is cold and so it's better to just run the code rather than sitting there optimizing it. But it does mean that there are a lot of optimizations that would be profitable to perform in the aggregate that are nevertheless unprofitable to do at runtime, because the time spent to optimize outweighs the running time of the unoptimized code. AOT optimizers like Prepack can solve this dilemma, by doing expensive optimizations ahead of time so that doing them won't eat into the runtime of the program.


That's not all; there's the cost of (down)loading the dead code, and the startup cost of loading the unoptimised code. Even more critical when you consider environments where you can't JIT, like React Native on iOS for instance, which was one of the many motivations for Prepack AFAIK.


It's much better to do this earlier. If you're asking v8 to optimise, say, the whole of React on a mobile device before first paint, that's definitely going to be slower than having the same optimisations done before V8 needs to parse/optimise it!


The examples are very far from the JS I see and read, but this is definitely a very useful tool. It seems like gcc -Olevel. It would be interesting to incorporate some sort of tailoring for JS engines into this, like how a compiler might try to make x86-specific optimizations. For example, if you know your target audience mostly runs Chrome (or if the code is to be run by node), you might apply optimizations to change the code to be more performant on V8 (see https://github.com/petkaantonov/bluebird/wiki/Optimization-k... for example).

I love it and can't wait to use it on some projects!


The examples seem more typical of Coffeescript output.


You've mentioned that a couple times, but I'm really not seeing it. What to you looks like coffeescript there?


"Coffeescript output" is javascript so it doesn't look much like coffeescript. Presumably GP doesn't like the specific javascript idioms to which coffeescript transpiles.


A long time ago there was a theory about using Guile (the GNU Scheme) as a general interpreter for languages using partial evaluation: you write an interpreter for a language in Scheme, use a given program as input, and run an optimizer over the program. This turns your interpreter into a compiler. I played around with the concept (making a Tcl interpreter), and it even kind of worked, often creating reasonably readable output.

Prepack looks like the same kind of optimizer – it could be a fun task to write an interpreter and see if this can turn it into a compiler/transpiler.
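
A toy sketch of the idea: an interpreter for a tiny expression language, applied to a fixed program. A partial evaluator that can execute `interpret(program)` at build time collapses the pair into a constant (the first Futamura projection, in miniature):

```javascript
// Interpreter for a tiny expression language of numbers, + and *.
function interpret(node) {
  switch (node.op) {
    case "num": return node.value;
    case "add": return interpret(node.left) + interpret(node.right);
    case "mul": return interpret(node.left) * interpret(node.right);
    default: throw new Error(`unknown op: ${node.op}`);
  }
}

// The "source program": 6 * (3 + 4), known at build time.
const program = {
  op: "mul",
  left: { op: "num", value: 6 },
  right: {
    op: "add",
    left: { op: "num", value: 3 },
    right: { op: "num", value: 4 },
  },
};

// A Prepack-style tool could fold this entire call down to: console.log(42);
console.log(interpret(program)); // prints 42
```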


What you're talking about -- writing an interpreter that's optimized into a compiler -- is actually coming in the soon-to-be-released Java 9. Check out Graal and Truffle. I'm pretty excited to play with it at some point.

Prepack reminds me more of a "supercompiler," because it focuses on partial evaluation rather than optimization.


Your "supercompiler" phrase made me think about what happens if you start applying the partial evaluation over and over. Of course nothing happens... unless you know something more than what you knew before. Which you might! That in turn made me think of the Wolfram Language, which feels like this to me – you declare things, and as the set of declarations continues the language starts to "know" more things, and your statements become more concrete. This is interesting because it's all automated, you can undo things, change them, implicitly loop them by considering multiple possibilities.

I'm not sure you could take Prepack and do this. But it sure seems interesting. A kind of partial evaluation coding notebook... not so unlike a symbolic spreadsheet I suppose.


PyPy does it a lot like that.


RPython, to be more accurate


This should have a big impact on the "cost of small modules", as outlined here:

https://nolanlawson.com/2016/08/15/the-cost-of-small-modules...

Which is to say, one of its most effective use cases will be making up for deficiencies in Webpack, Browserify and RequireJS. I'm a little ambivalent about that - I wish we could have seen improvements to those tools themselves (it's possible, as shown by Rollup and Closure Compiler) rather than adding another stage to filter our JavaScript through. But progress is progress.


I saw a Webpack example and it looked smaller & faster. So it seems like a good thing.


  function define() {...}
  function require() {...}
  define("one", function() { return 1; });
  define("two", function() { return require("one") + require("one"); });
  define("three", function() { return require("two") + require("one"); });
  three = require("three");
--->

  three = 3;
There is a certain irony that now it's possible to do optimisations like that in javascript - a dynamically typed language with almost no compile time guarantees.

Meanwhile java used to have easy static analysis as a design goal (and I think a lot of boilerplate is due to that goal) but the community relies so much on reflection, unsafe access, dynamic bytecode generation, bytecode parsing etc that such an optimisation would be almost impossible to get right.


it's possible to do such optimizations for a (safe) subset of javascript, such as these pure functions.

Arguably java has a larger subset even today.


I think the remarkable point in the above optimisation was that the non-pure functions define() and require() were also subject to optimisation even though the optimizer had no special knowledge about them. Using symbolic execution, the optimizer was nevertheless able to reason about them.


A webpack plugin for prepack. https://github.com/gajus/prepack-webpack-plugin


How "safe" is it? I'm thinking, for example, of Google's closure compiler and the advanced optimizations, which can break some things.

Or roughly, if it compiles without errors, is it safe to assume it won't introduce new bugs?


It's not yet ready for production, so there are some bugs and cases where we should reject a program but don't do that yet.

Having said that, it's quite safe, but won't be undetectable. Code using eval could detect injected identifiers, we don't currently aim at preserving function names, and the method bodies you get with toString() are altered. That should be roughly it.
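
A small illustration of the `toString()` caveat (my own example, not Prepack output):

```javascript
// Function source text is part of JS semantics, so rewriting a
// function body is observable in principle.
function secondsPerHour() { return 60 * 60; }

// Unoptimized, this prints the literal source, "60 * 60" included:
console.log(secondsPerHour.toString());

// If a tool folded the body to `return 3600;`, any code that inspects
// toString() (some serializers and frameworks do) could tell the difference.
```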


The real litmus test: Does FB use it in production yet?


Has this been tried on popular JS packages such as jquery?


I was under the impression that V8 and the like are so optimized that this would give marginal gains. Would love to be wrong though. Do you have any performance benchmarks?


One of the examples contains a function to calculate the nth fibonacci number, and then a variable uses the result of that function.

Prepack doesn't seem to optimize for V8 in particular; rather, it just does some calculations ahead of time. V8 cannot possibly do these optimizations. How would it know that e.g. my code tries to calculate PI to 1000 decimals? That's where prepack steps in and calculates PI to 1000 decimals for you during your build-step, so that the JS you ship doesn't have to do that calculation.
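
Concretely, something like this (my own sketch, not Prepack's actual output):

```javascript
// A naive recursive fibonacci: the call runs at build time...
function fib(n) {
  return n <= 1 ? n : fib(n - 1) + fib(n - 2);
}
const x = fib(20);

// ...so the shipped residual program only needs the equivalent of:
//   const x = 6765;
console.log(x); // prints 6765
```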


The benefit of this would be at runtime when implementing interpreters:

> experiment with JavaScript features by tweaking a JavaScript engine written in JavaScript, all hosted just in a browser; think of it as a "Babel VM", realizing new JavaScript features that cannot just be compiled away

I've been playing with making toy languages inside of Javascript, and I believe there's lots of untapped power there. The paradigm battles don't have to be: we can run all sorts of paradigms in the same VM, with data executing as code. This means that you can decompose expressions to see where they came from (e.g. the steps in a state machine that yielded an evaluation result). If you believe (as I do) that invisibility-by-default is one of the greatest pain points in the history of computing, then these sorts of approaches are essential.

The problem with doing that is that some things will be slower than they would in "native" JS. I've been proceeding anyway and thinking that could be dealt with later. So I'm bookmarking this, because it attacks that exact problem: runtime-compilation of generated AST's.

The name is a little unfortunate, in that respect, though, especially since it will make people think it's a build tool like Webpack. Webpack is for dead fish. This is incomparably more powerful.

edit so to answer your original question, this would piggyback on the optimizations of the VM (V8 or SpiderMonkey, or whatever), taking for granted that JS which is not needlessly verbose (as generated code must sometimes be), can be run nearly optimally.


> Webpack is for dead fish.

I haven't used webpack but I have used many similar tools, and I've heard a lot of praise for webpack. Can you elaborate on why you dislike webpack so much?


I'm using "dead fish" in the Bret Victor sense.[0] It's a static tool by nature. By the time you're in a live environment, webpack is gone.

[0] https://vimeo.com/64895205


Oh. Isn't prepack also a "dead fish" then? Like you said, "It's a static tool by nature. By the time you're in a live environment, [prepack] is gone." There isn't any sort of prepack runtime.


I was assuming that the intention was to eventually provide runtime support. If that's correct, then I think we'd be in agreement about what "static" means?

Babel and all sorts of other compilers can run in the browser. Whereas webpack is mainly concerned with the text that you transmit and could not "by nature" be made to do its job in the browser (because then it would be too late). That's the difference I had in mind.


Reducing this initial work would really help startup time - especially on mobile. Consider that v8 etc are really fast only after the JIT kicks in after seeing repeated work: for time-to-first-significant-paint that's too late.


I see, that makes total sense, and I've run into that exact problem in my React Native app. Global defines take seconds to be defined initially since they need to execute functions. Would still be interested in benchmarks though.


A good resource to have info about this initiative: https://www.youtube.com/watch?v=xbZzahWakGs


Thanks!


Without reading further, I'd say yes and no.

No for client javascript. Compilation is a slow and intensive process. You don't have the time to optimize the mountains of code loading on every new page. It makes the page much slower to load. The returns are negative if all the code executed turned out to be a single function to flash a menu, before the user left.

Yes for server javascript. These are long-running processes; web servers should take time on startup to perform JIT compilation. It's beneficial in the long run.


This happens at compile time. Not at runtime. This happens on the developer’s computer, not the user’s.


Awesome project, the performance gains seem real, but why wouldn't these optimizations be happening at the javascript JIT level in the vm? (serious question)

React / javascript programming, is the most complex environment I've ever dug into, and it's only getting more complex.

create-react-app is great for hiding that complexity until you need to do something it doesn't support and then it's like gasping for air in a giant sea of javascript compilers.


Because they'd have to happen every time for every user, so there'd be no real gain there. It has to happen once, when the code is published as the minified bundle.


download js -> run through JIT compiler -> execute

vs.

run through JIT compiler -> download js -> execute

Whatever overhead the JIT compiler adds will be latency for the user.


In the second case it's AOT (ahead of time) not JIT (just in time) compiler.


Very interesting. Nobody has mentioned how formal and technical this README is: it goes into real detail about what the tool does, and even lays out future plans in three sections across 30 bullet points. One bullet point in the really-far-future section said "instrument JavaScript code [in a non observable way]" (emphasis mine), and that phrase was repeated in several other bullet points. It seems to me every compiler/transpiler/babel-plugin changes JavaScript code in a non-observable way, no? Just a theory, but that undertone sounds to me like the ability to change/inject into JavaScript code undetectably, on the fly, in some super easy way.

Just another day at Facebook's office...


Your tin foil hat can go away when you consider that you're instrumenting code that will permanently run in an observable sandbox. You control the server (for node) and can watch what external requests it's making; and while you don't directly control the client, you do have access (via the web inspector) to all external requests it's making.


This is exciting, and has a lot of potential to significantly improve JS library initialization time.

I wonder if this is the same project[0] Sebastian McKenzie previewed at React Europe 2016?

[0] https://www.youtube.com/watch?v=xbZzahWakGs




What is the business model for a tool like this? Who has the resources to spend man-years of work while also creating such a fantastic, simple yet comprehensive landing page?


It's by Facebook. They have 1.x billion users across their web and mobile products all of which run JavaScript. Tiny improvements in product performance equals big $$$. Easily justifies efforts like this. And the rest of us get to free-ride :-)


On the very bottom of the page it says "Facebook"


It's a Facebook project.


Coming from a non-CS background, I've always wondered why you can't "convert" code from one framework or paradigm to another. For instance, converting a project from jQuery to React. If you can define the outputs, why can't you redefine the inputs? That's what it seems like this project does... I suppose converting frameworks would be a few orders of magnitude harder though.


Can you turn lead into gold? Maybe, using nuclear transmutation. But the potential cost of the process seems higher than the benefits.

Can you convert a code base to React from Angular? Maybe, but the effort to write this converter is higher than rewriting the code base.


The problem is not converting; the problem is converting in a way that the result is usable by a human.


Different design patterns


Facebook's javascript / PL game doesn't disappoint. This is awesome!


I'm happy to see there's an option to use this with an AST as input, more tools like this should follow suit. Hopefully it can then push us to a place where there's a standard JS AST such that we don't reinvent that wheel over and over. Babel seems to be winning here, but I don't think it matters so much which one wins so long as any one does.

This tool looks interesting, particularly its future direction, but I'm wary of efficiency claims without a single runtime metric posted. The claims may be true; initializing is costly, but so is parsing huge unrolled loops. For an optimization tool, I'd hope to see pretty strong metrics to go along with the strong claims, but maybe that is coming?

Interesting work, nonetheless!


Pretty cool, it did not make much difference in my application size, as it has very little static data in it. It seems pretty rare to do something like:

    fib(2);
and more common to do:

    getInputOrHttpOrSomethingAsync().then(function(a){fib(a)});


Not a comment about the tool, which looks cool and well done.

It's sad that there are developers and projects who write the type of code that causes these sorts of performance trade offs. I stopped writing this kind of fancy code a long time ago when I realized it wasn't worth it. You're just shooting yourself in the foot in the long run.

I think static analysis performance optimization tools are great but a certain part of me thinks it just raises the waterline for more shitty code and awful heavy frameworks that sacrifice the user experience for the developer experience.

"Just run it through the optimizer" so we don't actually have to think about what a good design looks like...


Evaluating constant expressions at compile time is something any good compiler will do. It's not bad practice to write something like "60 * 60" in code instead of 3600, if the underlying computation is important. At the same time, performing a multiply at runtime is unnecessary and wasteful. This just gives you in JavaScript what any decent C compiler has been doing for ages.
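
For example (a sketch; the folded form is what the compiler effectively ships):

```javascript
// Source keeps the intent visible:
const SECONDS_PER_DAY = 60 * 60 * 24;

// After constant folding, what actually ships is equivalent to:
//   const SECONDS_PER_DAY = 86400;
console.log(SECONDS_PER_DAY); // prints 86400
```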


Every time someone says "optimizers are harmful, just write fast code to begin with" I ask if they've written "x / 3".

Optimizers are there to help you write maintainable code instead of unmaintainable micro-optimized spaghetti. Use them.


> I stopped writing this kind of fancy code a long time ago when I realized it wasn't worth it

I'm not sure what you mean by "fancy code." I feel like it's the opposite. We often try to write code that is readable and maintainable, with as few "fancy" magic constants littered throughout it as possible, but I'm happy if a compiler wants to optimize it.

If I have to specify some variable, e.g. `frameRate = 60` and then I have a whole bunch of other variables throughout the code that depend on it, then I'm going to reference framerate everywhere: `framePeriod = 1/frameRate; frameBufferSize = frameRate * 100` or whatever. This way if we change our specs, I have a single number to change, instead of dozens of numbers that all depends on each other.

Similarly, if I'm calculating the volume of a sphere, you bet I'm going to put `v = 4/3 * Math.PI * Math.pow(r, 3)`, because it's a meaningful statement that everyone will understand, without trying to get "fancy" by optimizing it.

But should a compiler optimize it? Heck yeah.


I disagree with the premise a little bit. A tool like this could allow you to write code which is better suited for a human to read/maintain, but then still get the performance optimizations of pre-computed results being sent over the wire.


There is some confusion in this thread about the purpose of this tool, which is targeted at generated code—specifically, code generated by other compilers. In order to get language features that don't exist in javascript, code has to be generated in a more or less context-agnostic way. This (as I understand it) brings more context to bear, to reduce the cost of the new abstractions for your specific usage.


Is that right? The Scala.js compiler emits none of that "run-time metaprogramming" boilerplate that Prepack is good at optimizing away.

In general, I would expect from a compiler to JS not to emit performance killers like that. Any decent compiler to JS should be able to remove its own crap on its own ;)


Why improve the compilers when we can have yet another tool to layer on top of our ever growing tool stack.


I take it you've never looked at a modern C++ or similar build and execution chain under the hood then?

There's a LOT of parts that do different things. They may be aliased under one command (with tons of flags) but a modern system does a lot of stuff.

For modern JS we appear to have:

- Linters (ESLint, etc)

- Transcompilation to Object Code (Babel, etc, transpile to JS)

- AoT Compilation (this & Closure Compiler, etc, do optimizations on the code ahead of running it)

- Recompilation (AST based compression - like Uglify)

- Compiling (the actual JS VM, V8, etc)

- JIT (in the actual browser)

None of these steps are alien to other build chains.


Ah that makes sense. I was wondering why the examples all looked like Coffeescript output.


Maybe someone can try it on a large AOT compiled Angular2 project?


> It's sad that there are developers and projects who write the type of code that causes these sorts of performance trade offs. I stopped writing this kind of fancy code a long time ago when I realized it wasn't worth it. You're just shooting yourself in the foot in the long run.

I respectfully disagree :)

I stopped writing excessively terse, 'overly clever', borderline 'minified' code a long time ago when I realised it wasn't worth it - it isn't readable, and it certainly isn't maintainable by my future self, let alone others. Maintainability is generally more important than saving a few cycles. Abstraction does not necessarily === "shitty code" - abstractions help us to build a mental model and reason about what the code is doing.

Having said that, I certainly see where you're coming from - abstraction can be overused, taken to the nth degree such that it's almost impossible to work out how anything actually works, and I do rather loathe such excessively heavy frameworks. But Prepack can help optimise the good frameworks that use just enough abstraction.

On the face of it, Prepack gives us the best of both worlds - developers can write maintainable code, and Prepack can optimise it a bit for us. Others here have mentioned it too, but I see it as akin to the same sort of optimisations that compilers generally do.


What percentage of typical code is packable like this? What I really need is a way to easily determine, "is it worth bothering with a tool like this?"


This looks very good indeed, but the lack of an initial data model severely limits the production usability of this tool. You can't use "document" and "window" ...

It's the same problem TypeScript has/had: for external libs you need definition files for it to work. Now if we had a TypeScript-to-assumeDataProperty generator, that would be VERY interesting!


This is a very early release. I'm sure there'll be a "build for the web" mode soon enough.


I think that just-in-time compilers are better at doing their thing. Sure, it is a nice project that can interpret and print preprocessed JS, but I think it might in fact not bring speed in most cases.

And the current state doesn't even know how to constant fold this loop.

  function foo() {
    const bar = 42;
    for (let i = 0; i <= bar; i++) {
      if (i === bar) { return bar; }
    }
  }


Ahead-of-time optimizations that don't have a large enough negative drawback by some other criterion (speed vs. size) are welcomed by me, even if the JIT can do the same optimizations too. Regarding your code example, maybe Prepack was changed in the past week and a half, but it folded quite fine when I tried it:

  (function() {
    function foo() { const bar = 42; for (let i = 0; i <= bar; i++) { if (i === bar) { return bar; } } };
    console.log(foo());
  })();


> helps make javascript code more efficient

https://github.com/facebook/prepack/issues/543

Are you sure?


This reminds me of Morte, an experimental mid-level functional language created by Gabriel Gonzalez. They both seem to be super-optimizing, that is, partially executing the program in question. Of course it is a great deal easier to do in a functional language than in JavaScript.

http://www.haskellforall.com/2014/09/morte-intermediate-lang...


I wonder what this would do to Purescript code?


I did an experiment to look for synergy from combining Prepack with the Closure compiler: http://www.syntaxsuccess.com/viewarticle/combining-prepack-a...

The result was pretty good.


I want something that can separate my code into what can be precompiled into wasm and what has to stay in JS. Maybe just insert comments so I can see what needs to be done.


I can't get anything to work in it. Just for fun I put the non-minified vue.js source inside and I get:

null or undefined TypeError at repl:537:23 at repl:5:16 at repl:2:2


Has anyone used this with webpack + reactjs ?



How does one measure the performance improvement for a web page gained from such tools?


Just throw your webpack bundles in and be amazed.


How does it compare to Google's Closure Compiler? It is considered by many to be best in class. It understands the code (it uses the Java-based Rhino JavaScript engine), while most alternatives (UglifyJS & co.) just monkey-patch things. You can trust the Closure Compiler's output.

Edit: @jagthebeetle: have you tried "advanced mode"? (One should read the documentation before using it; it's really a game changer, but it requires reading the docs first.)


This was my immediate question. For want of a more rigorous test, the online closure compiler service (https://closure-compiler.appspot.com/home) failed to produce equivalent or shorter output for all but the first example.


Closure compiler optimizes code size, while this optimises code execution.


I see that it claims that, but that's not entirely true. The closure compiler does a variety of perf optimizations as well (e.g. inlining).

Closure is really a great compiler, it's just a shame it doesn't interact well (that is, at all) with the modern JS ecosystem.


As someone who works on Closure Compiler, this is one of my biggest gripes with the project. Things are getting better though! CC now supports Node's module resolution algorithm. It works pretty well with ES6 imports, but not so well with CommonJS (mostly because the exports are impossible to statically analyze).

Within Google, CC is heading towards being an optimizing backend for other, less painful languages such as TypeScript (tsickle) and the yet-to-be-released J2CL compiler.

CC does pretty well with these examples (our debugger is not quite as flashy): https://closure-compiler-debugger.appspot.com/#input0%3D%252...


> CC is heading towards being a an optimizing backend for other less painful languages such as Typescript

I actually considered musing about something like this in my post. Typescript support would be very interesting.

Fwiw, the tooling support around the CC has been a problem historically. We relied on Plovr for a long time, but eventually it fell unmaintained, and there wasn't an alternative for a lot of the relevant parts (e.g. gathering source files you care about). Some important dev features also just didn't quite work as intended (sourcemaps) for a long time.


[flagged]


From the guidelines:

> Please resist commenting about being downvoted. It never does any good, and it makes boring reading.

https://news.ycombinator.com/newsguidelines.html

Disclaimer: I do not work at Facebook.


While I appreciate your vigilantism (given you aren't actually a moderator, nor do you work at FB), the real elephant in the room is that HN really needs a better solution for vote collusion. I am already familiar with the guidelines, and I usually follow this rule to a tee, but the optics here are damning.

What I'd really like to see is this discussion steered back on topic. It was originally interesting and productive. I like talking about web technologies; it is my passion.


Frankly I only downvoted because of the unfounded accusations of Facebook shilling. There's a certain level of pretentiousness in thinking one's comments could only ever be downvoted because somebody was literally paid to do it. And that attitude rubs me the wrong way.

To bring it back to the discussion itself, I'm also a huge fan of web technologies, so I'm always glad to see further optimizations being made.

I'm hesitant to use it now as it's still so early, but would like to see it mature so it can be included as a step in our gulp/grunt build processes.


Continuing the discussion means not opportunistically taking a cheap shot first. :(


You're right, but I also didn't realize until after I'd commented that I was replying to the same author as the parent poster.


Baseless accusations of collusion and shillage aren't OK on Hacker News. We care a lot about vote quality and we're more than happy to take a look at specific cases if you have concerns, but please email us at hn@ycombinator.com so that the discussion here can continue.


Sure. I emailed my feedback to you, thanks.


Prepack is quite complementary to traditional compilers. Its strength is that it comes with full knowledge of the JavaScript built-ins, and it uses that to pre-evaluate the code at compile time. In an extreme case, an entire program can get reduced to the final result.
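For readers who haven't seen the project page, a toy before/after in the spirit of Prepack's hello-world examples (hand-written to show the idea, not actual Prepack output):

```javascript
// Input: initialization code whose result is fully computable at build time.
(function () {
  function hello() { return 'hello'; }
  function world() { return 'world'; }
  global.s = hello() + ' ' + world();
})();

// Prepack-style pre-evaluation runs the IIFE at compile time and emits only
// the residual effect on the global heap, roughly:
//
//   s = "hello world";
```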


Right, I tried editing my post a bit more to make it clearer:

Added the adjective "optimizing" to "compiler" in the first sentence (I meant something like LLVM opt, what you get when you -O3 your Makefile -- constant folding, loop unrolling, dead code elimination, compile-time constexpr function evaluation -- your extreme example being running the entire program at compile-time).

Added quotes around JS "compiler" -- what your run-of-the-mill JS developers call "compilers" aren't actually real optimizing compilers, even though the JS world would say otherwise, but I digress. Yours is the first to even come close.

Your approach is subject to all sorts of breakages. You mention the lack of "environments" in your documentation, which means Prepack out of the box is completely oblivious to the browser itself -- the DOM, etc. What's really annoying is that if Fib(6) gets replaced with 8 (along with the whole definition of Fib itself), then I can't pop open my console anymore and type Fib(3) -- nor can I bind a UI control to a textbox that evaluates Fib(text) in the box.

An actual working compiler takes the "execution environment" as a whole into account. You can't Fib(x) Linux into oblivion.
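To make the complaint concrete, here is a hand-written sketch of the two cases (illustrative only; the folding shown in comments is the kind of output such a pass could produce, not verified Prepack output):

```javascript
// Case 1: fib is called once with a constant argument and never escapes.
(function () {
  function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
  global.answer = fib(6); // fully computable at build time
})();
// A pre-evaluating pass can reduce this to just `answer = 8;` -- fib itself
// vanishes, which is exactly the console breakage described above.

// Case 2: exposing the function as a residual global value is one way to
// keep it callable after optimization, since it escapes into the heap:
(function () {
  function fib2(n) { return n < 2 ? n : fib2(n - 1) + fib2(n - 2); }
  global.fib = fib2; // reachable from the output, so it must survive
})();
```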


How much marketshare do you think that Javascript will lose once WebAssembly gets mainstream?

Has it evolved so much that it's now a language you would pick out of free will? With ES6, I'm hearing that it's actually usable and kinda nice, but every time I use JS for something instead of Python, I get hit with things like the weird typing that you just can't change.


"How much marketshare do you think that Javascript will lose once WebAssembly gets mainstream?"

I'm no fortuneteller, but I suspect that people writing the most demanding client applications for web will be very happy when it does.

Browser-desktop convergence wasn't meant to happen through JS, and after the proliferation of the hacks that we have today to get around inherent design flaws in the language, it really shows.

"Has it evolved so much that it's now a language you would pick out of free will?"

"Free will" is a very good way of putting it. I suspect some Node developers were ex-frontend devs that just didn't want to learn another language. Moreover, the realities of project management, human resource allocation, and fast iteration are the forcing function behind using Electron instead of building a real desktop app.


> are the forcing function behind using Electron instead of building a real desktop app.

I have to work hard to not fall into this way of thinking most of the time. I am as frustrated as you with the proliferation of Electron apps. But a lot of the time it boils down to:

Application X would not exist if Electron did not exist. Cross platform dev is hard. More programmers, no matter what language they use, is a positive thing. Lowering the barrier to entry via things like Chrome Dev Tools, Electron and related technologies is a good thing.

After all, how many people are still using the first programming language they started with?


Right, that's a bit more descriptive way of what I meant when I said "human resource allocation."

> "After all, how many people are still using the first programming language they started with?"

Is the answer to your rhetorical question here supposed to be "a lot"? Because anybody who has programmed a long enough time has had to switch languages to keep up with ever-evolving technology.

You're right, a lot of people on HN who started coding around 2013 when the web/mobile bubble started inflating learned JS and haven't touched anything else.

If you've been coding since the 1990s or even 2000s, JS was definitely not your first language (ECMAScript wasn't anywhere close to being capable of doing the things it is now up to even 2006), and so you're likely equipped with the necessary skill set to build a real desktop app.

So even if you learned JS as your first language, it's not TOO unreasonable to be expected to learn something else like Python, C#/.NET, or C++ that is a bit more suited for desktop app development.

But, yes, when you have a market flooded with JS developers and bootcamp grads, THAT is the forcing function for using Electron because C++ developers are now VERY tough to hire. They're all doing the most hardcore infrastructure projects at the likes of Facebook, or they're working on bare-metal performance optimization at NVIDIA, or they're working on ML infrastructure for self-driving cars, or they're doing high frequency trading in New York.

Your run-of-the-mill seed-stage startup just doesn't have the budget for someone who knows C++ really well because even the big companies struggle finding and hiring them. With things like pointers and DIY memory management, C++ programming is HARD and easy to shoot yourself in the foot by making simple mistakes that aren't even possible to make in higher-level languages like JS.

Luckily, there's plenty of other languages out there for building real desktop apps that aren't JS or C++.


> More programmers, no matter what language they use, is a positive thing. Lowering the barrier to entry via things like Chrome Dev Tools, Electron and related technologies is a good thing.

Why? Are more plumbers also a good thing, even though this reduces the average wage of the plumber and the quality of plumbing?


Because programming is an insanely powerful tool that can make so many peoples jobs easier and more productive. The more people that can use that tool the better.


This sounds a lot like the taxi driver argument against Uber/Lyft to me.


I was right there a year or two ago developing portable native client things for my project, Linux on the Web (I was able to get vim and python running).

Projects like that can pioneer a whole lot of new techniques and infrastructure that can be very easily applied to whatever technology is eventually going to win out.


@dennykane It's a shame seeing your comments get downvoted on here too. It really leaves me with a bad impression about how FB operates.

@rattray Even without access, downvotes don't just happen out of nowhere; you have to admit that this just looks too obvious to any reasonable person.

I'm curious what you thought was downvote-worthy of my original post pre-edit?

I get it, rules are rules, and this is a matter of principle.


Unless you have access to the HN database, there isn't a way to know the source of downvotes.

This kind of commentary is not welcome on HN.


Late to the party, but I suspect you were downvoted for your arrogant and condescending tone, combined with your sense of unsubstantiated victimhood.


I'm not sure how you can leap to conclusions about my tone of voice from textual comments? One user @rattray is even citing quotes from the rule book to me here, and somehow I'm the one being condescending? But hey, regardless of the situation, nobody deserves to be insulted like this! You can do better, man. I can give you the benefit of the doubt; I suspect you wouldn't ever act this way in person to anybody.

I also don't see how any sort of purported victimhood would be unsubstantiated (I just don't happen to take website comments from strangers all that seriously): I'm just trying to talk about WebAssembly, and this troll spends their Friday night posting these kinds of inflammatory comments on nearly-dead threads from the beginning of the week. :(

As a reminder, you're breaking the very first rule -- "be civil" -- since we're sticklers about the rules here. You are acting very rude!


Actually, it turns out that at the time you posted this, you couldn't even see the contents of my original post that was downvoted because it was flagged and hidden via the moderators, so I'm not sure how you can even comment at all on why it was downvoted in the first place.

Ridiculous leap to conclusions here!


The destination page looks uncomfortably like Webpack's.

Not the best idea, imho.



