Stop Writing JavaScript Compilers. Make Macros Instead (2014) (jlongster.com)
118 points by tosh 7 months ago | 145 comments



It’s not exactly equivalent, but having spent the last month debugging a nightmare of C++ template metaprogramming mixed with a lot of C macros... I think things like this can be nice if they’re applied tastefully, but I wouldn’t trust 99% of programmers to be judicious with these things. As someone else said, it’s catnip to a certain type of coder. It also creates a really nasty implicit dependency — you can’t pull out individual components because they’re dependent on the DSL, and who knows what dependencies the DSL pulls in? I would actually argue that macros might be why Lisp didn’t become mainstream; they create a lot of fragmentation. Most people don’t even want to learn the syntax of their build system; asking them to learn some weird language extensions they can’t take to their next job seems like a rough sell.

Edit: one more thing I forgot to add: this is a nightmare for tooling. I know there’s the contingent that wonders why you’d ever want anything more than vim or emacs, but for the rest of us, this just means refactoring/code navigation/syntax highlighting/autocomplete/static code analysis are totally broken.


C/C++ macros are particularly horrible as they are essentially text replacement on the source code. Other languages (such as Rust) have macro systems which work with ASTs and can be used in isolation without interfering with other stuff.

For example there's a library in Rust which provides a `json` macro with syntax like so:

    json!({
        "foo": "bar",
        "qux": baz
    });
where `baz` can be a Rust string or integer. Note: this will typecheck, so if you provide something weird as `baz` then it won't compile, but otherwise the syntax is identical to standard JSON syntax.
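For concreteness, a complete version of that example as I understand it (the crate is named further down the thread as serde_json; `baz` is bound here just so the sketch compiles):

    use serde_json::json;

    fn main() {
        let baz = 42; // any serializable value works here; something
                      // non-serializable would fail to compile
        let value = json!({
            "foo": "bar",
            "qux": baz
        });
        println!("{}", value); // {"foo":"bar","qux":42}
    }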


That still doesn't address the issue of macro syntax being foreign/not easy to refactor. See the nom library for an example.


That's a good point. The nom library definitely does exhibit a lot of the problems described above. There are plenty of macros in the Rust ecosystem that don't, though.


Rust procedural macros are just regular Rust code that runs at compile time. You get a vector of tokens in, and return a new vector of tokens out.
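For what it's worth, a minimal sketch of that shape (strictly, the input and output are TokenStreams rather than vectors; `noop` is an invented pass-through macro):

    // In a crate with `proc-macro = true` set in its Cargo.toml.
    use proc_macro::TokenStream;

    #[proc_macro]
    pub fn noop(input: TokenStream) -> TokenStream {
        // Inspect or rewrite the tokens here; this sketch returns
        // them unchanged.
        input
    }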


And how does that address this issue?


Procedural macros and macros are different things; nom uses the “normal” kind of macros (“macros by example”).

The initial procedural macro release is in 6.5 weeks. Nom has worked on stable for years.


I know procedural macros are different in how they work and more flexible, but the problem remains the same: you can't really look at any macro definition and go “I know how to use this.”

A vector of tokens is not syntax a programmer can parse without the grammar definition the macro implements. It's still foreign unless it's well written to match existing language constructs.


Sure; to be clear, I'm trying to point out what I think your parent is getting at. I don't necessarily agree.


Oh, I thought you were talking about the issues as a macro writer. Now that I reread it, I think you're talking about as a macro consumer. Is that right?


What's the canonical way to deduce the types in Rust from JSON? JSON Schemas?


This particular crate, serde_json, lets you create any type that implements Serde's Deserialize trait when you're parsing.
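A hedged sketch of that (the `Person` type and the JSON string are invented for illustration):

    use serde::Deserialize;

    // Any type deriving Deserialize can be parsed into directly.
    #[derive(Deserialize, Debug)]
    struct Person {
        name: String,
        age: u8,
    }

    fn main() -> Result<(), serde_json::Error> {
        let p: Person = serde_json::from_str(r#"{"name": "Ada", "age": 36}"#)?;
        println!("{:?}", p);
        Ok(())
    }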


... and it also provides a dynamic interface too, if you need to work that way. You probably don’t...


‘Most programmers can’t be trusted with macros’ is a sentiment I read a lot, but at one point in time people might have said the same about functions, and arguably they could say the same about pointers or recursion today.

I think that being able to think symbolically about one’s code is a vital skill for a programmer (and, more generally, that the capability of thinking abstractly is a vital skill for a learnéd human being). Thus, if someone lacks that skill, perhaps he should not be a programmer? That may sound revolutionary, or elitist, but I think most of us would agree that someone who can’t effectively reason about a depth-first tree traversal will have trouble as a professional software developer; mayn’t that also be true of someone who can’t effectively reason about computational structure?

Your comment re. macros & fragmentation makes me wonder if you’ve ever used Lisp’s macros: I think that they actually mitigate fragmentation, as e.g. CLOS started out as macros atop Lisp and ended up being standardised as part of it. Lisp’s packages help manage namespace collisions, and thus prevent fragmentation by making differences manageable.

And Lisp tooling typically handles macros very well indeed.


> [...] but at one point in time people might have said the same about functions, [...]

I really like your example, because it nicely highlights how much of a tooling problem this really is.

To call a subroutine, you'll have to arrange your argument values on the stack, then execute the call instruction, which pushes the return address onto the stack and jumps to your target procedure...

Except that almost no one does it this way anymore. I usually never think about all this stuff when I call `foo()`.

Writing a macro in C/C++ is the equivalent of doing the whole legwork to call functions yourself.

What I find really surprising, though, is that we haven't ever arrived at the point where we think that "most programmers can't be trusted with arbitrary mutability". It's everywhere, and since the C/C++ days, it got worse with many languages. I absolutely hate the house-of-cards code I regularly deal with in C#, because people won't write the couple of additional lines it takes to encapsulate the internal state of their types properly. So instead, you get mutable public members or properties everywhere... *sigh*.


I think the point being made is about incurring unwanted technical debt, rather than about the skills of programmers. A macro enthusiast can leave behind a codebase that requires substantial expertise or siloed knowledge to work with.

It's not that you can't trust people to make good decisions or invest in understanding what they're doing. It's that in real life people take shortcuts. Time constraints, deadline pressures, a brief skimming of a StackOverflow answer with no followup or further exploration - any of these can lead to code that does the job, is ignored until only one person is left who understands it, and needs to be parsed out and refactored. I think you can only trust programmers with macros (or functions, or pointers, or object-oriented programming) if you have good practices in place to handle technical debt (clean naming, clean separation of concerns, adequate testing, etc.).

As a side note: the other big part about macros is YAGNI. Macros are fancy, but I've yet to come across a use case for them that can't be replaced with a simple function call. I'm also curious about how one would test a macro.


> I'm also curious about how one would test a macro.

The same way you test any other kind of function call?


> Macros are fancy, but I've yet to come across a use case for them that can't be replaced with a simple function call.

For starters, macros are mandatory for controlling evaluation order; anything that needs to manipulate its arguments before they are evaluated must be a macro. E.g., a short-circuiting "or" function is impossible without being either a macro or built in to the language.
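A sketch of that constraint using Rust's macro_rules!, hedged (`my_or` and `expensive` are invented names): the macro controls when its second argument is evaluated, which no ordinary function can.

    macro_rules! my_or {
        ($a:expr, $b:expr) => {
            if $a { true } else { $b }
        };
    }

    fn expensive() -> bool {
        println!("evaluated!");
        true
    }

    fn main() {
        // Short-circuits: `expensive()` is never called, because the
        // macro expands to an `if` that never reaches the second arm.
        assert!(my_or!(true, expensive()));
    }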

Another common use case is efficiency. Any substantial computation you do at compile/eval-time never has to be done at runtime. I've worked on a Clojure router that compiles its nested route structure into efficient string-matching/regex code before it ever gets turned into machine code. Other languages either have to re-parse the routes, or find a way to cache a more efficient representation as part of an extra build step.
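Not the Clojure router itself, but a minimal Rust sketch of the same idea (`router!` is hypothetical): the route table becomes a plain `match` during macro expansion, so nothing re-parses route strings at runtime.

    macro_rules! router {
        ($($path:literal => $handler:expr),* $(,)?) => {
            |path: &str| match path {
                $($path => Some($handler),)*
                _ => None,
            }
        };
    }

    fn main() {
        let route = router! {
            "/" => "home",
            "/about" => "about",
        };
        assert_eq!(route("/about"), Some("about"));
        assert_eq!(route("/missing"), None);
    }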


I'm coming around to the same feelings about dependency injection that many others express about macros.

In broad strokes, I really like that composition is increasingly being favored over inheritance. But I've also inherited some deeply Byzantine amalgamations of decorators and abstract factories that certainly do eliminate a lot of boilerplate, at the cost of sometimes making it very difficult to understand where certain behaviors are coming from.

The worst is, most of them are ones that I inherited from my past self. The specialized knowledge it takes to maintain an application that's been written this way apparently fades quite quickly.


The unique problem with macros (at least in the C/C++ world) is that they can be used to invent new syntax. Now when you join the company, not only do you have to learn the code, but you also have to learn the language extensions at the same time. If there are a lot of them, you're effectively learning a whole new language with likely minimal intentional design, with all the potential problems that a new language comes with (unintentional/hard-to-understand side effects, performance issues in the output, poor domain fit, etc.).


For all the flak Java gets, it got something right on this topic, IMO. Take the lowest common denominator of language features (for a time), and have the solution to almost everything be code written with those very, very few constructs. Yeah, it ends up verbose as hell, but even though I haven't done Java since the early 2000s, I can look at our backend code (which is all in modern Java), and short of a few new things like annotations and lambdas, I can read it just fine.

The problem space is very, VERY well understood. The patterns were bikeshedded to death almost 20 years ago. It's super boring, there aren't many interesting decisions left, and it's awesome because of it. Of course, software engineers hate this in general because they get way too much enjoyment out of figuring out how to save 2 lines of code for their own good ;)


There are two things about Java syntax that are my sore points.

Not having had the guts to make override a keyword, forcing the dumb @Override annotation. It is a matter of taste I know. But they could have made use of contextual keywords, thus keeping backwards compatibility.

I really would like to have a typename kind of statement, instead of having to make empty classes just to simplify typing in generic code.


It seems quite intuitive that the complexity of code scales with the amount of build time logic it contains, whether that be C macros, JS transpilation, or whatever else.

I think if a lot of developers programmed with this mindset we'd be in a much better place. The best example I can think of off the top of my head is people choosing Vue over React "for its simplicity". You're really just taking the little bits of ritual and reliance on JS fundamentals that React contains (which isn't that much in the first place) and moving that complexity into a DSL with a big transpilation footprint.

It makes sense from a "I want to play around and build features quick" perspective, but it's a horrid decision in terms of the complexity of the software and the number of moving parts it contains. Unless like many you (foolishly) believe the law of leaky abstractions doesn't apply in JS land for whatever reason.


I couldn’t agree more. I almost want to call it the iceberg metric. If 90% of your complexity is hidden underwater it’s still there, you just don’t notice it until your boat crashes into it.


But this is also how civilization is built. We need that 90% to be underwater just to build the remaining 10%.

Proper macros make that complexity much more accessible. That is, if things work correctly, you still don't notice, but if they don't, you get to easily fix them (though you still have to pay the cost of understanding it first).

Macros can be seen as incidental tooling reduced to only essential complexity. For comparison, a typical JS transpilation toolchain seems more like 80% of incidental complexity.


I definitely agree with this. I think there's obviously a point where the complexity savings from using a macro outweigh the inherent disadvantages. But I'd also guess that a large percentage of the programmers using them wouldn't know where to draw that line, and probably don't spare a single thought on it.

My point was more around instances where the supposed benefits of moving that logic to build time are superficial, which as you alluded to, is rife within the JS ecosystem.

I hate to pick on Vue too much, because for the most part it's a pretty good tool, but in their docs they claim one of the motivations for building Vue and using it over React was that it's more familiar to people who have built traditional web apps, because it leverages more html/css knowledge rather than JS knowledge (it doesn't, other than in superficial ways, by the way. But that's a discussion for another day). This seems insane to me. Just as insane as if you used a ton of C or Lisp macros to make the language look like Python, just because your devs were used to Python. I feel like in those communities you'd get called out for adding a bunch of accidental complexity and told to suck it up and just learn C. But in JS land it's just par for the course.


You also have hdom & co. coming from the i-am-not-a-framework side - https://github.com/thi-ng/umbrella/

Production size and evaluation times are a thing with react apps.


>template metaprogramming mixed with a lot of C macros...

The author cites Lisp macros. C macros, or C++ template metaprogramming, are to Lisp macros as a 19th-century locomotive is to a state-of-the-art maglev train.

>I would actually argue that macros might be why Lisp didn’t become mainstream; it creates a lot of fragmentation.

This has no substance. Please take a look at how macros are used in Lisp.

>this is a nightmare for tooling

Not in Lisp.


You say “in lisp”. But which Lisp? Common Lisp or Scheme? And then which implementation of scheme? And which object system? See, that’s kinda my point. If I talk about Python or Java or C# we’re not even having this discussion, because they’re monolithic languages and features are core to the language, not built on top. On the other hand I can’t really move my code between Lisps.

And I don’t buy the argument that it’s just because Lisp has been around a long time. Python is almost 30 years old but there’s one canonical version. And seriously, I bet there are a lot of pythonistas that would change things about, for example, the object system if they could (explicit “self” comes to mind), but if they did that the community would fragment with some people using the original system and others using an alternative and it would be hard to integrate new libraries and know the rules of the module you’re working in. It’s a trade off no doubt, but it seems like most people have voted that a rich ecosystem of libraries without compatibility issues is more important to them.


That's why Lisp has a main standard. Common Lisp.

Most other languages, derived from Lisp, are incompatible - like Ruby is incompatible with Perl. Thus Scheme is called Scheme and not Lexical Lisp - it's now a new language family - the family of Scheme dialects - which are grounded in the RnRS reports as language standards.

It's good that there is only one/two Pythons - but Python competes with a zillion other scripting languages - or they compete with Python. And they are not compatible at all.


Mmm, I think it's a huge stretch to say those languages were derived from Lisp. They certainly borrowed/stole a lot of features and were heavily influenced, but that's not the same thing. For instance, it would be way more accurate to say that Ruby is more derived from Smalltalk than it is from Lisp. (And not coincidentally, Smalltalk is a lot like Lisp in that it's a very minimalist language where most of the useful constructs are built on top of a very small core. And similarly it's also very balkanized.)

By the way, I don't mean to imply I don't like these minimalist languages with powerful extensibility, like Lisp and Smalltalk. They're very elegant. I just think that there's a very large tradeoff to going that route. If I'm writing code for myself I love using Lisp or Smalltalk, but if I'm working somewhere and I have to debug someone else's code, I kinda want something boring and unsurprising.


> Mmm, I think it's a huge stretch to say those languages were derived from Lisp

I didn't want to say that. I said 'Most other languages, derived from Lisp, are incompatible' - that means the languages which were derived from Lisp - like Scheme, Logo, Dylan, Clojure, partially JavaScript - those are incompatible. Just like Ruby, which was either influenced by Perl or in the same application niche (web programming, scripting, ...), is incompatible with Perl.

There is a similarity:

Lisp -> lots of incompatible dialects

Scripting languages -> lots of incompatible languages

But Lisp has Common Lisp as a standard, which has enabled many different implementations with different capabilities, but a large shared core language.


Languages, macros, libraries, and apps are in descending order of how many people will use them, and ascending order of how tightly coupled they are to their problem domain. Thus they should be designed and created in descending order of experience and domain expertise. Very few people should create languages, a few more macros, etc.


I agree w.r.t. languages -> libraries -> apps, but I would say that macros are part of a library, along with functions, methods, datatypes, classes, interfaces, etc.

A typical library should usually define more functions/methods than datatypes/classes, more datatypes/classes than interfaces and more functions/methods than macros. The exception is libraries which only provide some particular macros or interfaces or whatever.


Macros are a kind of function that have the ability to hide a lot of things, including control flow, and often do, which often makes them unable to be composed. That's why I put them between languages and libs. They're a little less flexible than regular functions/methods/classes and they usually have tighter lock-in.


> ability to hide a lot of things, including control flow, and often do

They're still local transformations though, so I would think they're either:

- Choosing between their arguments somehow, e.g. `(unless condition body)` expanding to `(if (not condition) body)`, or `(with-foo body)` running `body` inside some setup/teardown boilerplate (see the sketch after this list). In such cases we just need local reasoning (possibly after expanding the macros; but like functions, if you have to keep reading the definition to see what it's for, the abstraction it provides has been lost)

- Invoking other code (macros and functions); but functions can do this too, so such non-local reasoning is essentially the same.
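The sketch promised above: a hedged illustration of the `(with-foo body)` shape in Rust's macro_rules! (since Rust macros came up earlier in the thread; `with_timing` is an invented name). The macro wraps its body in setup/teardown code, a purely local transformation:

    macro_rules! with_timing {
        ($body:block) => {{
            let start = std::time::Instant::now();
            let result = $body;
            println!("took {:?}", start.elapsed());
            result
        }};
    }

    fn main() {
        let sum = with_timing!({ (1..=1_000_000u64).sum::<u64>() });
        println!("{}", sum);
    }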

A phrase like "hiding control flow" sounds scary, like we're implementing "GOTO" all over the place (or `call/cc` or something). Yet macros themselves can't do that; they could only do GOTO-like-stuff if there are GOTO-like constructs in the language, in which case (a) that's what's causing the underlying difficulty and (b) functions can presumably do similar shenanigans.

I must admit that I do most programming in Haskell these days, so I take it for granted that I can implement my own control flow using ordinary functions (thanks to laziness). That's just as well, since Haskell's macros (Template Haskell) are pretty gnarly; they're very useful on occasion though (e.g. I'm currently working on a script which reads in and parses a bunch of JSON from a filename (taken from an environment variable) at compile time, building a datastructure that I can query at runtime).

> which often makes them unable to be composed

Composed in what sense? I get that we can't e.g. pass macros to higher-order functions, and things like that; but I wouldn't say that's especially due to their use for control flow. I'm more frustrated by the general asymmetry between functions and macros, which can cause us to write the same functionality multiple times (I have the same frustration in languages which distinguish between functions and operators, functions and methods, statements and expressions, objects and "primitives", etc.).


> this is a nightmare for tooling

I self-inflicted some pretty awful debugging woes due to C++ macros a few years back. VC++ is incapable of stepping into/through macros - so a logical error in my macro had to be solved with good-old printf debugging. I learned my lesson about using macros sparingly.

This can obviously be alleviated with source maps. That presents a new set of dilemmas: does the developer want to debug the macro in its macro form, or its expanded form? Does the developer only want to debug a specific macro in expanded form, while keeping other macros unexpanded? How do you even debug a "case macro" in macro form? How do you debug the case macro code while it is expanding?


I am not so sure about the tooling situation; at least with Lisp macros you can expand the macros and work on the underlying, generated code.


Exactly this. Given how a majority of JS devs tend to not learn from history and other languages, I would trust them with macros about as much as I'd trust a baby with a sportscar.


>but I wouldn’t trust 99% of programmers to be judicious with these things

The evil elitist in me wants to qualify that as "99.99999999% of JS programmers".


You've obviously not met many C programmers. I have a new general rule: if it confuses clang-format, it probably shouldn't be a macro in the first place.

The number of times I hit that pulled-out-of-my-butt rule in daily code reviews is above and beyond what you would expect. Also, what attribute of JS programmers would predispose them to abusing macros or templates more than C or C++ programmers already do?


> I know there’s the contingent that wonders why you’d ever want anything more than vim or emacs …

Well, we have tools that work. Why would we use tools that don’t work? It just seems weird to me, when a solution to a problem exists, to not use it.


After using Elixir, I love that 90.1% (according to GitHub) of the language is written in Elixir, and the excellent macro system is a major reason for this.

I actually think baking in the ability to add language constructs with macros is exceptionally useful and allows concepts to be expressed very clearly. A great example of this is Ecto, which allows you to generate SQL directly from within the language.

    from p in Friends.Person, where: p.last_name == "Smith"
Will produce a select query at compile time, with no ORM in sight taking up resources and getting in between you and the database.


> After using Elixir, I love that 90.1% (according to GitHub) of the language is written in Elixir, and the excellent macro system is a major reason for this.

It probably helps rather a lot that Elixir leaves all the low-level bits to Erlang, and if you check both elixir-lang/elixir and erlang/otp… Elixir is a minor player (according to tokei there are 1417963 lines of Erlang, 306349 lines of C — headers excluded — and 120090 lines of Elixir across the current masters of both repositories)


Clojure had this same property, and it seems like a benefit at first: because new "language features" aren't tied to releases of the language, anyone can create them and instantly share them with the community for everyone to immediately start using.

A great example is destructuring, which someone wrote a macro for, around version 1.2 or so, that got bundled with the language shortly after. You can't do that without changing the language, or being able to extend it with macros.

But in practice, having community-written macros means you'll get several versions of the same one, and there often won't be any clear winner, since each will have strengths and weaknesses and at least some authors will be unwilling to merge, add features, or change their version. So you end up with a bunch of almost-perfect macros and have to settle for their annoying bugs, ones you know you could fix in 5 minutes if you just made a private fork, but then you end up in the xkcd-standards paradox.

Babel showed that you can successfully achieve the same thing macros do, but outside of the language, by writing a compiler that just targets the language you want to execute. No need for macros to get destructuring, just enable the "destructuring" plugin.

Unfortunately, Babel misses the mark on this dream, because literally every Babel "plugin" is actually implemented in Babel's core library and all the plugins do is just enable a flag or two in the core parser/compiler. It doesn't expose any parser or compiler for you, and you can't actually add new tokens without forking Babel.


Changing a macro you own > forking a macro library > forking a transpiler > forking the language.

I admit I never worked with Babel on the plugin-writing side, but that sounds much more complex than writing macros.

With Clojure macros, and Lisp macros in general, one of the important parts is that they run within the same system as the rest of your code. That is, macro logic can be split out into regular functions that will be executed at compile time. Macros can reuse functions you wrote to be used at runtime. This property makes a language and your codebase much more coherent wrt. macro use.


How is this any different than any other library?

Basically, you shouldn't pull in a dependency that permeates your entire codebase. That it is a macro based library shouldn't change this. Right?

Language features often blur this, because people attach an unreasonably high level of trust on them not changing. Used to be, this was completely warranted. Truth to tell, probably mostly warranted even today.

However, this does mean you go at a much slower pace pulling in those features. And, the more any of these permeate what you are doing, the less you are in control of what it is you are doing. Probably fine for many cases, but this is the definition of technical debt. You are taking out a loan on someone else's technical asset (code) to accomplish your goal.

I liken this to being a kitchen builder. If you are just trying to build a few quick kitchens in standard layouts, Ikea/HomeDepot/Lowes all have fabricated cabinets that are actually quite serviceable and will likely build what you want. As soon as you are getting non-standard, those will start to cause some grief, and not having built up experience with custom cabinets is likely to be a source of trouble.


Would you be willing to provide an example of the "multiple incompatible implementations of the same macro" situation happening in the real world? I've been writing (modest) macros in CL for several years now, and never seen this occur - I've only seen "upgrading" where macro authors layer functionality onto existing macros in such a way that is mostly compatible with the original (such as sjl's extension of WHEN-LET/IF-LET to WHEN-LET* / IF-LET* [1], which I believe are compatible with the Alexandria implementations of the former), although it's quite likely that I simply don't have the experience to have encountered this situation.

[1] http://stevelosh.com/blog/2018/07/fun-with-macros-if-let/


I don’t see how this means Babel is better than macros, except that it’s harder to write syntax transformations if you have to “write a compiler”, so fewer people do it.

Is your point that you shouldn’t use new syntax until it gets into the ES standard?


In this example:

  macro swap {
    rule { ($x, $y) } => {
      var tmp = $x;
      $x = $y;
      $y = tmp;
    }
  }

  var foo = 5;
  var tmp = 6;
  swap(foo, tmp);
the article says it expands to this:

  var foo = 5;
  var tmp$1 = 6;
  var tmp$2 = foo;
  foo = tmp$1;
  tmp$1 = tmp$2;
Is that really true? The statement "var tmp = 6" appears in the source code before the place the macro is invoked. Can a sweet macro reach back and modify statements that occurred earlier in the file?


Macros cannot reach back like that, no. The renaming of the variable is due to sweet's hygiene system that operates on the program as a whole - if two variables with the same name exist in the same scope it will rename them to ensure uniqueness.
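For comparison, an aside: Rust's macro_rules! handles the same collision with hygiene contexts instead of renaming, so a macro-internal `tmp` and a caller's `tmp` silently coexist (a minimal sketch; `swap_vals` is an invented name):

    macro_rules! swap_vals {
        ($x:ident, $y:ident) => {{
            let tmp = $x; // hygienic: distinct from any caller's `tmp`
            $x = $y;
            $y = tmp;
        }};
    }

    fn main() {
        let mut foo = 5;
        let mut tmp = 6;
        swap_vals!(foo, tmp);
        assert_eq!((foo, tmp), (6, 5));
    }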


But then shouldn't the rewriting be like this?

    var foo = 5;
    var tmp = 6;
    var tmp$1 = foo;
    foo = tmp;
    tmp = tmp$1;


It could rewrite it like that, yeah, but sweet renames all collided variables, likely to make it clear that they collided. It doesn't try to keep the distinction between what a macro introduced and what was already in scope (not worth the complexity).

This was back in 2014, though, and sweet has changed a ton since then, so the output is probably different.


> so they (C macros) are pointless except for a few trivial things

Oh, believe me, they are NOT.


Right? This seems a little obnoxious. I could totally see doing all of the things he talked about with C macros.

But whatever.



Since C macros are glorified copy/paste, you can do anything with C macros... which was the point, I think.


??? Not sure what you're getting at. You can't actually use the language itself from inside a C macro, like you can in a Lisp.


Please no. I'd much prefer the logic of taking nice things like types and new features and having them implemented by a central team that knows what they're doing, as opposed to being implemented on a per-team/project basis by people who have no idea what they are doing.


Do you let the people who have "no idea what they are doing" write functions too? Put code into production? You pay them, right? You train them?

Why not teach them about macros and have code reviews on things they submit, just like you would normally?


This was written in 2014. What's the js macro scene look like today? Has Sweet grown in usage/utility?

Also, does this macro system have a concept of quoting/unquoting?


Worth noting that sweet.js mostly fell by the wayside after Babel came along.

With babel-macros, one can write macros that look exactly like function calls. Or, more accurately, one can make any valid syntactical construct expand as a macro. Webpack and friends also do this to an extent with loaders.

In the current Javascript landscape, I don't generally see macros being used for syntactical sugar as much as they are used for code generation: things like "take this SVG file and generate a React component that renders it" or "generate the boilerplate to inject this stylesheet that I added via a ES6 import declaration".


I remember when Sweet.js was first announced, I was excited by it so I started playing around.

I still like it a lot as an idea, but maybe not in the way it was originally intended. In the last four years people have built parser libraries for JS (of varying kinds), and of course you have what became Babel. And Webpack. You might as well see them as a way of plugging features into the language you're writing, taking macros to an extreme, because it's not pure JS any more.

Yet most of that could be represented as macro definitions if you were to put the work into it. So I actually like the idea of Sweet.js as a really simple way to experiment with JS syntax without investing in parsing a language from scratch, or trying to plug it into Babel.

In the context of JS itself though, it's not a solution I would promote as sustainable. As soon as you see the idea of macros you start to want to think of everything as a macro ("because it produces more optimal code!" or whatever) and then you get a codebase that cannot possibly be maintained, because changing one line of code in your macro isn't always the same as changing the same line inside a function.


I have never used macros. Is it really that much more productive and readable? Versus proper compiler features (async, =>, type system) + proper code design.


> Is it really that much more productive and readable?

It can be. Used judiciously, they can reduce boilerplate and help highlight the important parts of the codebase.

The problem is that they're catnip to a certain type of developer. If they're allowed to proliferate in an uncontrolled way, you can easily end up with unmaintainable code.

Worst case, you'll end up with someone implementing a half-arsed type system at which point you may as well delete the repository (ok, slight exaggeration).

I can't really comment on whether or not JS would be a better language with a good macro system. My guess is probably not but I have no evidence for that.


Emacs is a very important ecosystem that effectively uses macros. The danger of macro overcomplication is real, and Emacs' conventions are the primary way it avoids introducing problems.

For example, macros and functions are nearly the same thing: one runs at compile time, the other at runtime. But they both have documentation, and you can jump to a macro's definition the same way you would for a function. You can also compose them together and generate new ones on the fly, etc.

Basically, a lot of the problems with macros in other languages come from macros being considered this special type of not-really-a-function. It's either templating sugarcoating, or a restricted, non-Turing-complete layer. But if you treat macros as literally functions that just happen to run at compile time, the problems are reduced. And if you treat them with discipline and document what they do, and show examples, and add test cases, then you never have to worry -- just like normal software.

One helpful rule of thumb: If you copy some functionality more than twice, it's probably time to make it a function. If you define a set of functions repeatedly -- like test cases -- or find yourself writing the same pattern over and over, or if you can generate most of your structure automatically based on e.g. "I know this is a React component", it might be time to write a macro.
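As a hedged illustration of the test-case half of that rule of thumb, in Rust's macro_rules! rather than Elisp (`parse_roundtrip_test` is an invented name): one macro stamps out a family of near-identical #[test] functions.

    macro_rules! parse_roundtrip_test {
        ($name:ident, $value:expr) => {
            #[test]
            fn $name() {
                let v: i64 = $value;
                let parsed: i64 = v.to_string().parse().unwrap();
                assert_eq!(parsed, v);
            }
        };
    }

    parse_roundtrip_test!(roundtrip_zero, 0);
    parse_roundtrip_test!(roundtrip_negative, -42);
    parse_roundtrip_test!(roundtrip_large, 1_000_000_007);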

If this concept sounds intriguing to anyone, I highly encourage you to check out Lumen: https://github.com/sctb/lumen

It's a Lisp that's literally javascript:

Compiled code: https://github.com/sctb/lumen/blob/master/bin/reader.js

Original code: https://github.com/sctb/lumen/blob/master/reader.l

So if you understand Javascript, then there's not much to learn. And you can use it in your own projects right away.

Most of Lumen's brevity is thanks to macros.

(One hack for quickly understanding Lumen: read the tests. Every language feature is meticulously demo'd: https://github.com/sctb/lumen/blob/master/test.l)


As an OO programmer, I would still argue that: if I define a set of functions repeatedly, I will try to find an elegant object design to abstract that. One good point, then, is that I can debug that object design at any line of code. Whereas (I feel that) macros are undebuggable by design.


I get where you're coming from, and it's something that's been debated at length. For everything-is-an-object languages like Smalltalk, the dynamic features seem to be enough to cover lots of problems that macros are used for (e.g. https://news.ycombinator.com/item?id=14333824 ). I hear that Ruby uses monkey-patching more than other languages, which may also be an alternative (although I think macros are better than monkey-patching, since they're local transformations rather than "spooky action-at-a-distance").

Simula-style languages like C++ and Java don't offer the same sorts of dynamic features.

> Whereas (I feel that) macros are undebuggable by design.

I'd be interested to know what you mean by this. I've written macros in Emacs Lisp and Racket, and both were pretty easy to debug. In particular, the tooling for each (Emacs and DrRacket) can "expand macros" to show what code a particular macro usage will turn into (at which point we're back to debugging non-macro code, although potentially messier than we'd write by hand). In that sense they're more debuggable than functions, since we can't calculate the result of a function until run time (which is why functions need test suites, to provide a run time that's not the production application).


Note that the Blub Paradox ( http://wiki.c2.com/?BlubParadox ) applies to this sort of thing, which is important to keep in mind.

Essentially it says that we understand the things we're used to using, but not those we're not. Hence we can appreciate, instinctively, the problems caused by languages which lack a thing we're used to (for example, the standard "Go needs generics" arguments). Yet it's harder for us to appreciate those same problems if we're not experienced with that feature (e.g. someone who's only ever programmed in Go).


Suppose you were forced to use a language with no objects, and no closures. Since you're an effective dev, you'd find ways to cope. But we invented object systems to do better work.

Closures were another step up the ladder of abstraction, and now it's hard to imagine not being able to use them.

Macros are no different. It seems worth being skeptical of the idea that it's better for a programmer to have less power. And Emacs Lisp shows macros can work at scale without causing friction.

To your point on debugging, it all depends on the language. Elisp has an excellent debugging facility: https://www.gnu.org/software/emacs/manual/html_node/elisp/Ed...

The key is to internalize that macros are no different from functions. If you know how to call a function, you know how to use macros. The only difference is that they return code rather than values.


This was kind of my original point, though. If you do treat them as compile-time functions, i.e. for a bit of light code gen, then they can be super useful.

But that's often not the case. In fact, OO is a nice example. It's probably less true now that OO seems to be a bit out of fashion, but the number of crappy, poorly documented OO systems I've seen in Lisps over the years is staggering.

It's not really a criticism of macros so much as an observation. It's also possible that the type of developer who is attracted to adding a bijou OO system into their application is the type of developer that is attracted to Lisps. It may well be the case that adding a macro system to JS wouldn't have the same attraction so wouldn't have the same downside.


> If you know how to call a function, you know how to use macros.

Wrong: there are many macros that are impossible to grok without reading the docs carefully, and there are also scoping rules. They are nowhere near the usability of functions; macros should be written judiciously. Most uses of macros can be replaced by higher-order functions, unless you are making a language.


> I have never used macros. Is it really that much more productive and readable?

Proper macros (Lisp, Scheme, Julia, etc.) are just as readable as functions, since they basically are functions that operate on ASTs.

> Versus proper compiler features (async, =>, type system) + a proper code design.

Macros are a proper compiler/interpreter feature, and enable proper code design.

Relying on the language in place of the facility to write reusable macros is like relying on the stdlib in place of the facility to write reusable functions. Sure, ideally you want the common needs to be covered out of the box, but not every need is common enough, with a solution space well-enough explored, for a standard solution to be the right choice even as a default.


Those compiler features were all implemented as macros at some point in the history of language design. The thing about macros is that they allow you to experiment with language design in the language you are working with (which is obviously much easier when your language is homoiconic).


No they aren't. A macro solution is always less readable and less performant than a proper compiler feature. But you can't get a compiler feature for everything. People have different needs for different tasks and doing all of it in the compiler is just not going to work. Macros allow programmers to implement features that would otherwise take years (if not decades) of standardization to be accepted in a language.


Something I have been experimenting with, out of frustration with boilerplate, is what I call "persistent snippets". Basically, when you insert a snippet into an editor it leaves behind a magic comment that lists the parameters of the snippet, and wraps the snippet code in a //#region to enable folding in editors and languages that support that.

e.g.:

    import {insertSnippet} from "polish-and-release-this-someday"
    import {mstBoilerplate} from "./some-local-snippet-store"
    
    insertSnippet(mstBoilerplate, {name : "XCardModel"})
    //#region mstBoilerplate 20180911
    export interface XCardModelInstance extends Instance<typeof XCardModelImpl> {}
    export interface XCardModelCreation extends TypeHelp.Id<typeof XCardModelImpl["CreationType"]> {}
    export interface XCardModelSnapshot extends TypeHelp.Id<typeof XCardModelImpl["SnapshotType"]> {}
    export interface XCardModel extends TypeHelp.Id<typeof XCardModelImpl> {
      Type: XCardModelInstance
      CreationType: XCardModelCreation
      SnapshotType: XCardModelSnapshot
    }
    export const XCardModel: XCardModel = XCardModelImpl
    //#endregion mstBoilerplate 20180911
This means a simple editor extension and very simple command-line tools can be written to expand/refresh the "insertSnippet" lines as needed. mstBoilerplate is basically a template literal string in typescript/javascript land, but you can use whatever makes sense for your language. So this is useful for languages that don't want to / can't support macros natively. You end up getting a lot of the boilerplate-reduction power, and everything just works.

I feel that for a lot of cases this works better than macros: the expanded code is checked into version control so there is nothing hidden going on, debugging works, and if the code ends up needing to be specialized a bit, it's pretty easy to erase the insertSnippet line / comment lines and start modifying.


Interesting. I had the opposite idea a while back, inspired by a coworker who struggled greatly with indirection.

In my scenario I’d like to extend the code folding mechanism to show inherited methods and macro expansions inline, folded by default. Then you could drill into a code flow five functions and/or macros deep, without ever leaving the current editor window, if the code has good cohesion.

And if it doesn’t, then this is more incentive to fix your code.


> when run through the sweet.js compiler.

So... which one is it?


The title says writing compilers, not using compilers.

It's almost the case of "one compiler to rule them all", though IMO pervasive changes (like adding a type system) are better done at the compiler-level rather than the macro-level.


TypeScript it is, then.


Macros typically have limited scope compared to a compiler, and tend to operate locally.

Also, macros typically do not carry state between macro instantiations.

As such, with a macro it would be very hard to eliminate null from the language.

Therefore, we need compilers.


So CL macros often follow the pattern of several forms of processing before returning a quasiquoted expression using the results of those earlier forms. That is, some regular Lisp code, followed up by, I guess I'll call it, a DSL.

Is that really all that different from where much of mainstream javascript-land has ended up?

It might take a moment to familiarize oneself with macros or a DSL, but does that not almost precisely define react component definitions?

Component definitions execute some javascript then spit back JSX and these JSX components can be used in a DSL that doesn't follow javascript syntax?


Babel, the de facto JSX (and ES2015+ syntax) "compiler", has basically become a general-purpose AST manipulation engine.

Pretty much everything it actually "does" for your code is implemented as a plugin, and writing new plugins is relatively trivial if you're familiar with the AST objects and concepts.

Call it macros or AST manipulation, the effect is the same; the language becomes practically infinitely malleable. The only real limits to what JS+Babel can express are those imposed by the host environment (aka DOM APIs, no manual memory management, etc.).

To specifically address your last question regarding JSX: code such as `<MyComponent attribute={value}>Child Contents</MyComponent>` becomes something like `createElement(MyComponent, { attribute: value }, ["Child Contents"])` (sorry, I'm going off of memory here).

The transformation is applied by Babel, so it's parsed from the file as a string, transformed into an AST (with the help of the JSX plugin), and re-written back out as vanilla JavaScript to be executed by your browser / server process.


> It might take a moment to familiarize oneself with macros or a DSL, but does that not almost precisely define react component definitions?

Surprisingly, the answer seems to be no.

Experienced react devs rely heavily on metacomponents: components that take other components as input, and return a combined component as output. It's like oldschool template metaprogramming in C++, but you end up merely wanting to stab yourself in the eye with a fork rather than hurl yourself out the nearest office window.

A proper macro system would make this pattern unnecessary, because you would rely on macros to generate the specialized components. It would spit out react code that you'd otherwise have to write -- or that you'd have to write a metacomponent for.

Note that a metacomponent is not the same as a macro. It's similar, but it still operates at runtime.

Here's a rule of thumb. Can your macro system embed the contents of your /etc/passwd file as a string literal into your codebase? If the answer is no, then you're missing out on significant power.


Hopefully etc/password is a bad example? Security issues aside, I feel like reading files that are configuration in nature is best done "at runtime", not compile time. Can you imagine the code that loads a configuration file directly into the code and the ops team unable to reconfigure things? That would be the epitome of "well it worked in MY build".


Note that `/etc/passwd` contains no passwords or other security sensitive stuff. You might be thinking of `/etc/shadow`.

That is an example of something you might not actually want to do, though embedding a file from the code repository (e.g. as https://doc.rust-lang.org/std/macro.include_str.html does in Rust) is a reasonable and common thing to want to do at build time.
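E.g., a minimal use of it (`banner.txt` is a hypothetical file sitting next to the source):

    // Read at compile time and embedded as a &'static str; the path
    // is relative to the current source file.
    const BANNER: &str = include_str!("banner.txt");

    fn main() {
        print!("{}", BANNER);
    }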


I'd also like to reply to the other half of your comment:

> Can you imagine the code that loads a configuration file directly into the code and the ops team unable to reconfigure things?

Yes! That's what code is, actually! Code is stuff you put in a file which can't be configured easily, but is thus consistent between all deployments of the software.

It often actually makes it easier for a problem to be found and reproduced if configuration is baked into the binary vs varying based on the environment the code is run in. In a way, docker images are a means of statically compiling a bunch of random stuff (like /etc/passwd) into your application at build time to remove more environmental variables.

Really, loading a configuration file into the binary statically at build time isn't much different from having `const SOME_VARIABLE = "some value"` in your source code... the only difference is how you, the programmer, choose to represent that constant.

The line between "configuration" and "code constants" is very thin, so I don't think what you say is a sensible criticism.


> It would spit out react code that you'd otherwise have to write -- or that you'd have to write a metacomponent for.

Do you know if anyone is actually doing this?

edit: As it turns out, https://github.com/facebook/create-react-app/pull/3675, yes!



How do you define in what order macros are applied?


Dunno about sweet.js, but in Lisps, it's usually source order. E.g.: read the next form; if its head names a macro, call the macro function and pass in the code as the param; take the output and replace the macro form with the expanded version; resume reading...


But there is BiwaScheme.


Honestly, can we just stop writing JavaScript? Even better, remove JavaScript from webpages. Oh wait, that boat sailed a while ago.

That aside, I don't see what the problem is with cross-compiling to another language. Other than: if you're writing a compiler, why not compile to machine code? Honestly, I think code generators are better than macros.


I enjoy writing JS, so no, I won't stop.


Seriously. The hate for JS is strong, but I don't really see why.

Use TypeScript if you want, but I don't really see the issue. There are funny things related to implicit type conversion, and some rules produce funny and unintuitive outcomes. These conversions can be very handy, on the other hand.

"Math.min() > Math.max()" and they are damn right that this is true!


> Seriously. The hate for PHP is strong, but I don't really see why.

> Just use PHP 7! There are funny things related to implicit type conversion and some rules produce funny and unintuitive outcomes. These conversion can be very handy on the other sides.

> "PHP_INT_MIN < PHP_INT_MAX" and they are damn right that this is true!

Feeling the perception difference?


I always felt the same about PHP.

It gets a lot of shit because some of the oldest parts have some weird function names or parameter order, but outside of that it's a fantastic language, especially if you stick with PHP ~5.5 and newer. It's fast, it has a nice package system, and the people behind the language are making it better, faster, and more feature-complete every day, with standards bodies that include the biggest players in the space.

IMO languages like Go could learn a thing or 2 from PHP about how to stop worrying about the "best possible choice" and just start giving the language the tools that developers can use to solve real problems.


The difference is that JS has to be backwards compatible in order to not break 50% of the internet. With PHP, it is theoretically possible to make a major version upgrade that fixes most if not all of the issues without worrying about breaking backwards compatibility, since upgrading is opt-in.


Upgrading was opt-in with python as well, and look at where breaking changes got them... A decade in and there's still a massive rift in the python community over 2 vs 3.

PHP is backwards compatible almost to a fault, but that doesn't mean you can't use new libraries or frameworks that hide those warts away from you, and that's a MUCH better choice in my opinion than making a significant percentage of the internet incompatible with the latest PHP so that your function arguments for a handful of old function calls can all look the same...


"massive rift" is a bit of an exaggeration. Most Python libraries have Python 3 versions and it's not at all uncommon to see libraries that don't have support for Python 2 (see Django)


I don't think it is. Sure, it's not like they are 2 different languages, but it's something that all Python developers have to keep in mind when using libraries. Not to mention the headache of having both installed on many systems (what does `python` give me?), and there are still many things that don't have Python 3 versions (node-gyp is the one that bites me every day).

It's not the end of the world, and it is getting better every day, but having to spend the better part of a decade dealing with multiple "main" versions of a language is not something that I want any other languages I use repeating.

And if that means needing to look up whether the function is urlencode or url_encode, well, I'm happy to pay the price.


I mean sure, PHP has a package manager and you can write software in it. But what compelling advantages does it have over its competitors if you don't either have legacy code or are primarily a PHP developer?


It's been a few years since I worked with PHP daily, but off the top of my head:

* the shared nothing architecture is extremely easy to reason about (the world begins with a request, and ends once it's out)

* shared-nothing also means it scales stupidly simple. Need to handle more requests? spin up more servers, on an almost linear scale, until your DB can't keep up.

* it's pretty damn fast, all things considered

* the barrier to entry is a fraction of that of setting up a backend with Java or Go or Python (although setting up a LAMP-ish stack on Windows for development still kind of sucked when I last did it)

* It's probably only second to JavaScript when it comes to sheer number and breadth of 3rd-party packages

* backwards-compatibility is a huge bonus. I can be fairly sure that code I write now is going to keep working with minimal changes for the next decade (for varying definitions of "minimal")

* developers that are comfortable with it are cheap and readily available (although this is changing for the worse lately)

* it's simple. For the most part you don't need to worry about concurrency, about async requests, parallelism, callbacks, etc... Scripts run from the top to the bottom, and they do that every request. Sure, this is a massive drawback as well if you need those features, but PHP's "make it work any way possible" kind of ethos means that there are ways around most of those limitations that work really well... once you stop vomiting over how hacky some of them are in theory.

It's not the most pretty, it's not the fastest, it's not the safest, and it's not the most "pure", but it is a glorious mutt that just keeps on trucking along, and I'm not ashamed to say that I really enjoy the language and I would gladly start a new project in it today if the opportunity arose.


>it's pretty damn fast, all things considered

This is my favourite part of PHP. It really is fast. There's no server running all the time like a Node or Python website; on each web request it starts up again and rebuilds its entire state.


The bigger benefit is that architecture is so simple to understand.

The world starts with a request and ends with the last byte being sent. No worrying about shared state, no worrying about multiple application servers, no worrying about crashes or blocking operations reducing performance for everyone, etc...

It's just so simple that it lets you focus on building something, without worrying about all of the "software engineering" work that is all too often self-inflicted complexity.


Despite personally enjoying JavaScript, I can see why someone might heavily dislike it. Like the fact that it is/has long been the only choice, and that so many of the problems with the modern web have come about through JS.


You might enjoy writing it, but does anyone else enjoy reading it? ;)


The web still needs interactivity. Whilst JS might not be the best thing, what would you recommend? Interactive UI isn't an easy beast to tackle regardless of language.


>The web still needs interactivity.

[citation needed]


How can you do online shopping if you can't interact with the web page? Everyone forced to send an email when they want to buy?


I'm not advocating it (nor am I against it for that matter) but generic shopping cart modules have for a long time supported adding to the cart via URLs. It's quite a practical design as you just store a shopping cart URL with your product catalog data/page, and was kind of the starting point for "web services" and the composable web (until SOAP and REST ruined it with dogma wrt what you should and shouldn't do).


We had online stores even before JavaScript existed. All you need are cookies and form submission. Actually, as mentioned in another post, you don't even really need cookies.


We had stores even before online stores existed. All you needed is to go there.


By using form submission like in the good old Web 1.0 days.


You don't need JavaScript for that.


Which method do you propose instead for people to run interactive applications?


Internet protocols and native applications.

Let the browser be a document viewer.

Failing that, finalize Web Assembly's design to the point that the browser is just yet another general purpose VM.

WebIDL then defines the APIs accessible to any language that gets compiled into Web Assembly.


> native application

My issue with those is, they are less sandboxed than JS is. Why would I want to put one-off things all over my disk and system? While I'm willing to install native applications from the package manager for big things I really need, I do not want to do so for simple one-off interactive things, so I really like that I can just do such things in the browser without installing them. Things like simple games, mathematical demonstrations, ...

I think the value of sandboxed, self-contained interactive apps that can do graphics, input, and sound, and that run from a link, is tremendous to humanity.

Of course on the opposite side, one problem is the usage of so much JS for simple articles of text and photos. Those would be more pleasant to read if they were just text and photos and nothing more.


Ever heard of UWP, iOS, Android, snap, Flatpak, Qubes OS...?


On Android/iOS you install apps, which is the same as what I meant above by installing.

Qubes OS: heard of it. Essentially it's multiple isolated Linuxes running together. Do you really think one wants to install software on Qubes OS to view a mathematical demonstration or a simple game from the web? The issue is simply the effort of installing vs. just opening a link and viewing it.

I don't know UWP, snap or Flatpak. Do they allow you to open a link and run something interactive immediately without installation? If so, great.

We need something that allows you to open a link and have something interesting running immediately.

Again, it's sad that this same interactivity is also invading simple textual articles that should be just plain text, but that is a problem of those articles, not of the existence of JS.


You don't necessarily need to install apps on modern Android; ever heard of instant apps?


What do you find to be their advantage over JS web apps?


Performance, Java, Kotlin, C++


Instant apps have to either be quickly installed, be a static document, or be a document with some scripting functionality.


You can use them after a couple of seconds as they stream into the phone in the background.


Android and iOS are barely sandboxed - at least not in the ways that matter. Apple users regularly bring up moderation as an advantage Apple has over Android - but the whole point of sandboxing is you shouldn't need moderation. The fact that the two platforms are arguing over who moderates better is proof positive that their sandboxes are sub-par.

Flatpak is a (promising) work in progress that is copying a lot from web permissions. I look forward to the point when it becomes ubiquitous and secure enough that I would feel comfortable downloading and installing a random, unverified piece of code off the internet from a sketchy looking site. Flatpak is native finally starting to catch up to the web in terms of user-accessible sandboxing. But it's not finished yet, and once it is finished it won't be cross-platform.

Qubes OS is great, but nobody uses it because it's hard to use and cumbersome. Same with VMs. You can sandbox an app in a VM and be very happy about it, but your performance will be worse than it would be on the web. Nobody does it because it's not accessible.

Like it or not, the web is the best tradeoff we have right now between flexibility and security. For the most part, users don't need to be suspicious of random links. If someone shares a link on Facebook, you don't need to go look up reviews before you click on the site. It's not like native, you don't spend your entire browsing experience worried about malware.

To be sure there are problems we're still solving (user tracking, phishing, processor control), but native hasn't even gotten to the point of trying to solve those problems yet. Any app you download to your Android phone is going to be able to track you better than a website can - it's just that you'll be so worried about malware that you won't notice. Website operators are arguing about how to prevent phishing for full-screen apps. Linux, Windows, and Mac aren't thinking about stuff like a zone of death yet - a full-screen app in Linux can easily spoof your desktop. Heck, it doesn't even need to be full-screen, just copy the interface and icon of another app including its title-bar if you want to phish someone's email or banking client.

The simple test is this: suppose I paste a completely randomized link to an Android APK, and a completely randomized link to a website. You might have some reservations about clicking/installing them -- but, be honest, which one are you more frightened to interact with?

How many people here would feel comfortable installing a random program that was linked on "Show HN" if they couldn't get access to the source code or verification about what it did beyond 2 or 3 sentences? How many of those same people are fine clicking on a "Show HN" website? The difference is that for the most part web sandboxing actually works.


The fact that Google was forced to add Android support to Chrome OS to make it appealing outside US schools, and that Flutter was born of the Chrome team dropping the browser architecture, shows how much the Web still has to improve.

As someone that develops both native and Web, I feel the pain of catching up with native.

Web Components are finally around the corner; pity it took 20 years to offer what has been mostly a commodity in native UI development since Visual Basic and VBX were designed.

https://amexio.tech/amexio-canvas

Regarding clicking links, those that install anything outside of the store don't have anyone to blame but themselves, and zero-day exploits of browsers are a thing.


> those that install anything outside of the store don't have anyone to blame but themselves

If you have to worry about a moderator, it isn't sandboxed well enough. Platform moderators are a security failing - you bring them out when you haven't solved the underlying problem. A version of Android with acceptable sandboxing wouldn't need a Play Store. There'd be no downside to just running an APK.

I agree that both native and web applications have a point. But that is the point - both of them are filling in different gaps. In some cases, they're targeting mutually exclusive gaps -- people develop for native to get around browser restrictions. But getting rid of those restrictions makes it easier for native applications to phish and abuse users. And while efforts are being made to introduce sandboxes to native platforms, they are miles behind where they should be -- as evidenced by users' hesitance to venture outside of official sources for software, a problem that most users don't have on the web.

Because of those differing strengths and goals, it's very naive to say that Javascript on the web doesn't play an important, vital role in modern application development. No, it isn't a perfect sandbox. It's just way, way, better than everybody else's.

Maybe that will change in the future. But in the meantime, if you're a user, and you're paranoid about untrusted or proprietary code, you should be encouraging devs to develop web applications instead of native ones. If you're not worried about security, and instead your biggest thing is that you don't like the toolchain, or you don't think the app looks pretty because it doesn't use your GTK theme, then fine. Those are very different concerns. I don't think whether or not someone is fond of web components has anything to do with application security.


> native applications

That would require deploying on at least five wildly incompatible native platforms, and break when they change architecture.

The web is not so much an integration layer as the no-man's-land of the platform lock-in wars; it's a space that doesn't "belong" to any one platform owner, which is both its great benefit and its great disadvantage.


That problem has already been solved.

My C++, JavaScript, Python, Java, .NET code doesn't care which OS it runs on, and the parts that do are very small modules.


So none of it has any GUI?


Cross-platform GUI toolkits were a thing before the HTML-everything craze.


Yes, and they are always slightly second-class citizens, although Qt comes closest. Using a cross-platform toolkit is inevitably not quite the same as a native app on every platform - because you're not targeting the native API but a limited subset via a shim layer.

If it was easy, we'd be seeing a lot more apps available with Windows/Android/Linux/iOS native ports.


As if HTML/CSS was the same as a native app on every platform.

It is easy, just not for those brought up coding for the browser.

Or do you want to imply Electron is more native than even Gtk+?!


They're still orders of magnitude more "native" than whatever you can make with HTML and JavaScript.


Qt, wxWidgets, JavaFX, SWT, Xamarin, Unity, ....


> general purpose VM

With unblockable banners, uninspectable and unmodifiable code, and other garbage.

The current state of affairs is a near-perfect balance; WebAssembly can seriously hurt the user if it ever gets good-enough direct access to the DOM.


I fail to see how that is any different from minified JS.


Chrome provides excellent facilities for unminifying and debugging scripts; it's not even close to bytecode.


And WASM has a human-readable text form that is just as readable as unminified JS.
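For concreteness, this is roughly what that text format (WAT) looks like for a trivial function, so you can judge the readability claim yourself:

    ;; WebAssembly text format: a function adding two 32-bit integers
    (module
      (func $add (param $a i32) (param $b i32) (result i32)
        local.get $a
        local.get $b
        i32.add)
      (export "add" (func $add)))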


How so? Its code will still be closer to ASM instructions, whereas minified JS is closer to a decompiled JAR (I haven't personally seen one, maybe I'm wrong here). Minification doesn't even mangle every identifier, so every now and then you get set back on track of what the code does by the occasional Array#filter or something.


There is this myth that only bytecodes are fully reversible to source code and ASM is safe.

It is only a matter of having the right tools at hand.

https://www.hopperapp.com/

https://www.hex-rays.com/products/ida/

Code might not be 1:1 mappable to the original source code, but it is in any case reversible to a higher-level description, especially nowadays that rewritable code is no longer allowed due to security exploits.

https://webassembly.studio/


WASM has (or will have) exactly the same capabilities as JS (including asm.js, which already has DOM access).


well, cue unblockable banners then...


> Honestly, can we just stop writing java-script. Even better remove java-script from webpages.

> Other than if your writing a compiler why not compile to machine code

This is essentially the idea with WebAssembly. But, for web applications you need security and platform independence, so it's not actual machine code.



