
Because unfortunately many, many libraries and templating engines rely on evaling code.

Generated code is code that's outside of the package.

There are also a few (keyword: FEW) valid reasons for using eval in situations where it's beneficial to pull updates and modules from a known and trusted location.

In those cases, though, if you're not signing server-side and validating signatures in the extension with a pre-shared cert, you've still got problems (a single MITM attack can compromise your extension going forward).

Long story short, eval is still a very useful feature. It needs to be used properly, and it should be used rarely, but it's incredibly powerful, and lots of things you don't really think about suddenly stop working if it goes away.




> Long story short, eval is still a very useful feature.

And yet this is Mozilla's stance regarding extensions pulling code from outside the extension's own package[1]:

> extensions with 'unsafe-eval', 'unsafe-inline', remote script, or remote sources in their CSP are not allowed for extensions listed on addons.mozilla.org due to major security issues.

This is the sane stance in my opinion, with the best interests of users at heart.

I would like to be shown an actual, real case where eval() would be impossible to avoid for an extension, not just a theoretical one without a sensible, convincing example. As much as I try, I can't come up with any such scenario.
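For reference, the remote-code CSP that quote is talking about is something the extension itself declares in manifest.json; roughly like this (an MV2-style key, shown purely for illustration):

    {
      "content_security_policy": "script-src 'self' 'unsafe-eval' https://example.com; object-src 'self'"
    }

A listing whose CSP contains 'unsafe-eval' or a remote script source like that is what addons.mozilla.org rejects.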

[1] https://developer.mozilla.org/en-US/Add-ons/WebExtensions/ma...


You're really unable to come up with a single example of treating data as code being useful?

I mostly agree with you: eval is dangerous and often improperly used, but it's ALSO an incredible tool.

The complete opposite of your argument is languages that are homoiconic (https://en.wikipedia.org/wiki/Homoiconicity): they're built in a way that explicitly allows manipulation of code as data, even of the language itself, as a FEATURE.

They include languages like Lisp/Clojure/Julia.

Worse, many of the ways to "work around" the lack of eval just reduce the attack surface, or make an attack considerably less likely (obfuscation, not security!).

Simple case: You need to apply certain policies to certain sites, those policies vary based on the browser in use, the country of origin of the user, and the country the site is hosted in.

The way that those policies vary... ALSO varies, as new legislation is passed, corporate policies change, or certain countries become more or less stable.

A (completely valid) way to solve this problem would be to send both a policy engine and a policy set to the extension. The policy engine is eval'd and runs the policy set.

That allows real-time updates to the deployed extension's ability to parse new policies. Not just different data in the policies, mind you, but actual new policy capabilities.
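A rough sketch of the shape of that idea, with made-up names and a toy policy format (this is just illustration, not anyone's actual schema):

    // Hypothetical payload pushed to the extension.
    const payload = {
      // The "engine": code that knows how to interpret policies.
      engine: "(function (policies, ctx) {" +
              "  return policies" +
              "    .filter(p => p.countries.includes(ctx.country))" +
              "    .map(p => p.action);" +
              "})",
      // The "policy set": plain data the engine interprets.
      policies: [
        { countries: ["DE", "FR"], action: "block-tracker" },
        { countries: ["US"],       action: "show-consent-banner" }
      ]
    };

    // eval the engine once, then run it against the data.
    const runPolicies = eval(payload.engine);
    const actions = runPolicies(payload.policies, { country: "DE" });
    // -> ["block-tracker"]

Shipping a new engine changes what kinds of policies can be expressed; shipping a new policy set just changes the data.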

Do you have to use eval? Nope, sure don't. You can generate a custom EBNF (hell, why not just use the EBNF for ECMAScript), write your own freaking engine, and run it. All without eval. But you're still just running eval. You just get to call it something else, introduce an incredible amount of overhead to recreate a feature someone has already provided, and generally get less performant results.

Or, you can just fucking sign the payload with a pinned cert and verify on the client, and get the same guarantees for security you have with HTTPS to begin with.
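A minimal sketch of that approach, assuming the extension's CSP permits eval at all (which is the point of contention upthread). PINNED_PUBLIC_KEY_JWK and the URLs are placeholders, not a real API:

    // Verify a detached signature with a pinned public key before eval'ing.
    async function fetchAndRunSigned(codeUrl, sigUrl) {
      const key = await crypto.subtle.importKey(
        "jwk", PINNED_PUBLIC_KEY_JWK,
        { name: "ECDSA", namedCurve: "P-256" },
        false, ["verify"]
      );
      const code = await (await fetch(codeUrl)).arrayBuffer();
      const sig  = await (await fetch(sigUrl)).arrayBuffer();
      const ok = await crypto.subtle.verify(
        { name: "ECDSA", hash: "SHA-256" }, key, sig, code
      );
      if (!ok) throw new Error("signature check failed; refusing to run");
      eval(new TextDecoder().decode(code));  // only runs after verification
    }

The key is baked into the extension, so a MITM on the transport alone can't swap in a different payload.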


"Treating code and data" doesn't specifically require eval. Lisp macros avoid eval. Dynamic extensions to a running Lisp program don't require eval, but rather load.


Sure, but you're still taking data from outside of your original package scope and running it or allowing it to be run later.

That's just useful. JavaScript's eval gets a bad rap (entirely my own opinion) because

1. It's the de facto language of the web, which drastically increases both the number of inexperienced developers and the total attack surface.

2. Again, as the scripting language for browsers, it constantly needs to parse untrusted user input. Eval filled the need for parsing, but not the need for trust.
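The canonical example of that mismatch: before JSON.parse was everywhere, eval was the standard way to parse JSON, and it happily executes anything else the string happens to contain (untrustedJsonString here is just a stand-in):

    // The old pattern: parses the data, but also runs whatever else is in the string.
    const parsed = eval("(" + untrustedJsonString + ")");

    // The replacement: parses data without ever treating it as code.
    const parsedSafely = JSON.parse(untrustedJsonString);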

You can absolutely use eval in entirely safe ways. Most people don't, and doing so (especially in JS) is hard. That's because JS has made establishing trust hard. Only juuuuust recently, with the introduction of some of the WebCrypto work and the rapid rise of ubiquitous HTTPS, are we getting to the point where doing so is really feasible at all.


Indeed they do. I'm thinking of much of the NPM ecosystem, which assumes (incorrectly) that packages are unique, stable, and good. That whole situation a while back showed that not to be the case.

I'd be OK with grabbing code elsewhere, as long as I could guarantee that it would never change. Primarily, immutability. I'm thinking of something like an IPFS repo which is "elsewhere", but still very much crawlable, scannable for bad stuff (Turing issues aside...), and can be shown to reproduce the content if it is broken.

Using an immutable, self-certifying system would also solve the second point, regarding the single MITM. The trust would be with the file/package, not with some ephemeral cert (whose trust is brokered from above).
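A sketch of what that buys you in practice: pin the hash of the content you expect (the same idea behind subresource integrity and IPFS content addressing) and refuse anything that doesn't match. EXPECTED_SHA256_HEX is a placeholder:

    // Fetch code from "elsewhere", but only accept the exact bytes we pinned.
    async function fetchImmutable(url) {
      const bytes = await (await fetch(url)).arrayBuffer();
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      const hex = Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, "0")).join("");
      if (hex !== EXPECTED_SHA256_HEX) {
        throw new Error("content changed; refusing to use it");
      }
      return new TextDecoder().decode(bytes);
    }

No certificate chain involved: the trust is in the digest itself, so whoever serves the bytes doesn't matter.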


>I'd be OK with grabbing code elsewhere, as long as I could guarantee that it would never change.

But there are lots of times when the final result is much more valuable when the code CAN change.

You just have to have trust.

1. Trust that the folks who can change the code aren't malicious.

2. Trust that the code you think you're running is really the code you're running.

Neither of those things is really too much of a stretch. And the services and capabilities they allow are very, very nice.

Hell, statistically speaking... we're both typing these comments in Google Chrome, a browser that auto-updates itself all the time.

But 1. we trust that Google won't suddenly become malicious, and 2. we trust the mechanisms in place (HTTPS, cert pinning) to ensure the update is really the update Google sent.

In fact, this whole article actually boils down to a breakdown of trust: Turns out random extension devs aren't as trustworthy as we might like. They make mistakes and there's no safety net.


If it can't change, then what's the point of grabbing something elsewhere instead of just embedding it in the addon?



