AvastSvc.exe contains a full, unsandboxed JavaScript/DOM implementation (github.com/taviso)
473 points by phoe-krk on March 11, 2020 | 191 comments



I don't expect much from anti-malware companies, but this is one of those moments that made me absolutely dumbfounded that someone actually thought embedding an entire un-sandboxed JS engine with SYSTEM privileges was in any way a good idea. I actually had to get out of bed, open IDA, and start a Windows VM just to check that this wasn't some sort of elaborate hoax!

This isn't some MIDI parser logic, it's an entire JS interpreter that can parse DOM elements! How on Earth did this even get pushed out to a release? Did we learn nothing since the last time [1]?

1: https://bugs.chromium.org/p/project-zero/issues/detail?id=12...


Because low-effort JS developers are now everywhere. Just as JS should not be found on the server yet is now prevalent, JS is now finding its way into other places where it shouldn't be. You can't have an entire industry push this terrible ecosystem, then expect security companies to miss out on the fun. Locating and hiring C++ engineers at a scale is something that has become very, very difficult.


That is a really bad take.

Executing code written in any language -- dynamic, static, compiled, interpreted -- would be problematic here.

> That service loads the low level antivirus engine, and analyzes untrusted data received from sources like the filesystem minifilter or intercepted network traffic.

Forget JS. Do not load or execute code from untrusted sources in an unsandboxed environment with system permissions. This is about capabilities, not syntax. If your main takeaway is, "they should have used a C interpreter instead", then you have entirely missed the point.


I agree with you that no interpreter should be running there. It’s bad design.

But how many C/C++ engineers would think to design a system that runs arbitrary interpreted code, versus JS ones? The take isn't as bad as you think.


> But how many C/C++ engineers would think to design a system that runs arbitrary interpreted code

Multiple people, in this very thread, including you[0]. And apparently at least one Avast engineer and their upper management.

I'll requote/paraphrase another commenter[1] down-thread: it wasn't JS devs who wrote a custom interpreter inside a privileged C/C++ program. It was a C/C++ developer who thought, "I can handle this."

It's very important when calling out security failings to point out the real failing. If people are reading this and trying to take away security advice, I don't want their takeaway to be, "so my custom Lua interpreter is fine."

[0]: https://news.ycombinator.com/item?id=22545385

[1]: https://news.ycombinator.com/item?id=22545945


Re-read the comment. GP is not saying to make an interpreter for C++, they are saying that there should be no interpreter. If the language is compiled there's no need for one. C++ can obviously be insecure, but the scale of a JS interpreter + the fact that it's meant for executing arbitrary code leads to a huge security flaw that isn't present in just a normal C++ app.


Then just say that there should be no interpreter. Don't confuse the issue by talking about memory safety or act like there are better/worse ways to do this.

> but the scale of a JS interpreter + the fact that it's meant for executing arbitrary code

"the fact that it's meant for executing arbitrary code" is the only part of that statement you need. Avoid executing arbitrary code in unsandboxed/unisolated environments, even in a normal C++ app, even if the code is compiled. The scale doesn't matter.


The scale is important. A JS interpreter is tens of thousands of lines of code at a minimum. Even if sandboxed, the likelihood that there's a way out of the sandbox massively increases as more code is added. The possibility of bugs in the interpreter is another issue. For example, 10k lines of C or C++ code means there are 10k extra lines where there could be a buffer overflow, segfault, or memory leak. And then multiply that by the number of times those lines are executed with unknown input (AKA every line of JS code).


The scale is only important if you trust yourself to build a secure interpreter in the first place. Caring about the complexity of the interpreter means you are relying on the interpreter to keep you safe. Do not do that thing! The interpreter is not your sandbox.

If you're following best practices and isolating the process, then the number of lines of code shouldn't matter for the actual security. You should assume that a custom-built interpreter designed to run malicious code always has bugs -- whether it's running JS, Lua, whatever -- so you should run that code in a separate, sandboxed process that doesn't have system access.

It's not that the scale doesn't make a difference in complexity, it's that (for the most part) if you find yourself at the point where you're asking questions about the scale, you have already seriously messed up, and you need to go back and rethink your design.

----

The business problem Avast was trying to solve was, "how do we tell whether or not a random Javascript file contains malware?" The answer they came up with was, "we'll run the file in a process with system-access and see what happens."

I'll ask the same question I asked the original commenter: what is a safe way to solve that business problem without process isolation? And if you are correctly isolating the untrusted code, then why does the complexity of the JS interpreter matter?


Compiling untrusted code isn’t going to save you.


No, it won't. But there's no need to compile untrusted code in this scenario. All the code that is run should be packaged in.


I think you've possibly misunderstood the actual situation. This is a virus scanner. Analyzing untrusted code is the point.

The JS interpreter isn't there to lay out an interface, it's there to help them understand untrusted code that they find on the filesystem.


Where did I talk about having interpreted code?


> Contrast this with tailor-made, slim and well tested C++ code.

Avast is (was) running untrusted code in their system, by design. If your intention is to say that they shouldn't run untrusted code, then just say that. Don't waste time talking about whether memory safety matters for untrusted code -- just avoid untrusted code.

A lean C++ solution to the design problem of "how do we run untrusted code" is no better than an interpreted solution. You're focusing on the implementation, not the core design, and it is the core design that's flawed.

The only result of bringing JS up in a conversation like this is going to be to make people doing equally unsafe things in/with other languages feel better about themselves.


Again, what are you talking about?

No interpreter code should be running there, C, C++ or JS. What I am saying is that this design is made by or for people who are seeking to run JS as part of the business logic, which is ridiculous. I haven't brought up "memory safety" at all. Perhaps read the thread before commenting?


> who are seeking to run JS as part of the business logic

You can, of course, safely use JS for business logic without executing code from unsafe sources, in the same way that you can safely use C#, Lua, Lisp, or any other language. But that aside, your criticism is completely irrelevant to the actual security flaw.

I don't see any indication Avast was trying to use JS for business logic in the first place. I'm sure they over-embed crap for other parts of their interface, but that's not what's happening here. Avast was not loading a custom interpreter to execute their own business logic, they were loading a custom interpreter to analyze user files as part of their virus scan.

> That service loads the low level antivirus engine, and analyzes untrusted data received from sources like the filesystem minifilter or intercepted network traffic.

> Despite being highly privileged and processing untrusted input by design, it is unsandboxed and has poor mitigation coverage.

It wasn't Avast's reliance on JS as a development tool that caused them to say, "maybe we should parse and run arbitrary files on the filesystem in a process with elevated permissions."

It's their business logic that's the problem. Avast's business logic is, "we want to execute untrusted source code to see if it contains viruses." It wasn't a JS engineer saying, "I can't do my job in C++". It was a C/C++ engineer saying, "I know how I can tell if this file is dangerous -- I'll run it in the main process to see what it does."

If you can tell me a safe way to accomplish that business logic in C/C++ or in any other language without process isolation or sandboxing, then I'll concede the point. But I'm pretty sure you can't.

As an aside, an interesting followup to this disclosure would be if someone tested whether or not Avast is also interpreting other languages like Lua that they regard as potentially dangerous. I wouldn't necessarily take it as a given that JS was the only language they were doing this with.


> What I am saying is that this design is made by or for people who are seeking to run JS as part of the business logic

Nope, you've fundamentally misunderstood the issue at hand. The JS is not Avast's business logic. Rather, the program is attempting to analyze and detect malware in JS that the user encounters, similar to how an AV engine might inspect Microsoft Office files for malware. "Low-effort JS developers" had nothing to do with this. "C/C++ engineers" (and/or those above them) made this decision.


> If your intention is to say that they shouldn't run untrusted code, then just say that.

Or maybe stop trying to "gotcha" other commenters by making assumptions about what they were trying to say.


I was testing a pre-release version of one of our products at work, and it was causing (inadvertently) massive slowdowns periodically due to a naive approach to scanning a system for installed applications.

So, I did what any sane devops engineer would do; I throttled the CPU use limit for its cgroup in the systemd service file. Now no more scans.

Except now the UI wouldn't load. Couldn't figure out why. Just an empty white window. Turns out, it's running a node.js server and the whole UI is rendered in HTML/CSS/JS, but because of that it was so non-performant that the UI would effectively not render at all if it couldn't slam your CPU.

I can't think of any native-code, native-widget, control-panel-type UI that would completely fail to render the entire window at all if limited to 10% CPU time, but hey, here we are.
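For reference, the cgroup throttling described above can be done with a systemd drop-in along these lines (the unit and file names here are hypothetical):

```ini
# /etc/systemd/system/scanner.service.d/10-cpu.conf  (hypothetical unit name)
[Service]
CPUQuota=10%
```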


> it was causing (inadvertently) massive slowdowns periodically due to a naive approach to scanning a system for installed applications.

> Turns out, it's running a node.js server and the whole UI is rendered in HTML/CSS/JS, but because of that it was so non-performant that the UI would effectively not render at all if it couldn't slam your CPU.

So was it the naive approach to system scanning, or the web-based UI that was the problem? Because there are some performant desktop applications using web rendering, like VS Code. Though I'm not sure how VS Code would behave if limited to 10% of CPU, because that's kind of a weird scenario.


> But how many C/C++ engineers would think to design a system that runs arbitrary interpreted code,

This is essentially how antivirus software works. Every one of them packages an emulator to execute malicious binaries.

I'd say the number one thing stopping C++ devs from running eval'd C++ code is the lack of a std eval, and that's probably it.


Antivirus software normally matches code against a database of well-known patterns. It does not investigate the code on the client machine. AV software houses run their own labs, where emulation is used to inspect suspected malicious code.


To my knowledge every single major AV packages a local emulator. We have long since moved beyond a world where AV does basic pattern matching.

Frankly, I am far less concerned with the JS interpreter than I am with the rest of the codebase.

http://computervirus.uw.hu/ch11lev1sec4.html

https://www.blackhat.com/presentations/bh-europe-08/Feng-Xue...

http://joxeankoret.com/download/breaking_av_software_44con.p...


Given the prevalence of undefined behaviour, a C/C++ system should be assumed to contain arbitrary code execution vulnerabilities until proven otherwise. So in practice most C/C++ programs that process data can interpret code, even if they weren't intended to.


I don't know why people freak out so much about undefined behavior - yes, it's not defined in the language standard and that's quite unfortunate, but it becomes defined as soon as you choose a compiler. And careful work (and avoiding really hacky things) can let you easily write a C++ program that dodges undefined behavior if you're uncertain how stable your build chain is.

To be honest though, in the modern world, picking a stable compiler like GCC is a good enough choice for life - this isn't the 90s where you might have to dumpster dive to find copies of that specific Borland compiler your company decided to tailor their code to.

(edit: All the above holds until you start making assumptions about uninitialized memory, at that point you're really in trouble and, honestly, C++ really should be better about preventing you from using dirty memory)


The behaviour of GCC is by no means clearly defined. Even taking it as given that any memory handling errors will result in arbitrary code execution (accessing uninitialised memory as you say, but also e.g. double free), there are other cases. GCC has been known to compile the addition of two integers into arbitrary code execution. It has been known to compile code like:

    void doDangerousStuffIfAuthorized(AUTHORIZATION* authorizationPtr) {
      AUTHORIZATION authorization = *authorizationPtr;  // dereference before the null check: UB
      if (authorizationPtr == nullptr || !isValid(authorization)) return;
      doDangerousStuff();
    }
into something that executes doDangerousStuff() when passed null. When users complain about such things, the answer is that the code was causing undefined behaviour and so what GCC is doing is correct according to the standard.


As much as I am tired of the crappy code produced by some JS developers, this time they are innocent. If you had read the article you would have known that the JS code executed here is JS found on the Internet, not any JS written by Avast. The bugs are in Avast's C++ code (or possibly C).


The bugs aren't in the code, and this whole subthread, begun from what LeoNatan25 wrote, is a tangent. The bugs are in the design: downloading programs from random untrusted anybodies on the World Wide Web and running them, indeed running them with elevated privileges. In order to test whether they are malicious, no less.


> If you had read the article you would have known that the JS code executed here is JS found on the Interent, not any JS written by Avast.

I think this makes it worse.


Yes, but it also means that this is not implemented because of "low-effort JS developers".


I have read it. It's not clear what code runs inside the interpreter.

What reason is there to even have such an interpreter in a highly privileged process?


>What reason

Benchmarks. It's faster if you don't push all the scanned data through a process boundary.


If the interpreter was running code written by Avast then it wouldn't be a security issue. Having an interpreter running code you have written vs writing the code in C++ is not necessarily better or worse from a security point of view.


Highly disagree here. Javascript's DOM parsing functionality has but one purpose: presentation manipulation, i.e. rendering. Having something like that running as SYSTEM is a security issue in itself, regardless of where the code comes from.

FFS, even display drivers don't run with full system privileges anymore.


JS has no DOM API; browsers provide JS an API to use. Plus, the DOM has nothing to do with rendering, it's just a tree manipulation API.


Generally the interpreter is probably better, once you have enough memory-managed code that it outweighs the number of vulnerabilities in your native code by virtue of its significantly lower bug rate.


I'd expect server-side JS code running on a popular VM (V8, SpiderMonkey) to be safer than custom C++ (sandboxed vs running on the bare OS).

And BTW, this is why a WebAssembly runtime on the server is a big deal: being able to painlessly run any untrusted code from a "non-safe" language in a sandboxed environment.

Also, if you read it correctly, Avast is running "wild" JavaScript in a custom privileged VM (potentially written in C++)


Your expectation makes no sense. Popular JS VMs have huge attack surfaces, and are prime candidates for gray and black market vulnerability hunts. They are often not maintained, thus once a vulnerability is discovered, the entire app is compromised. In the case of a highly-privileged process, this can be catastrophic.

Contrast this with tailor-made, slim and well tested C++ code. And yes, I do expect security companies to have well-written and well-tested code.


> And yes, I do expect security companies to have well-written and well-tested code.

Your expectation makes no sense, given the vulnerabilities we've seen in AV software in the past decade.

If they insist that executing suspect JS is a good idea, they a) probably should use an established interpreter unless there's good reasons not to and b) not run it privileged.

EDIT: Avast appears to have deactivated this now: https://twitter.com/avast_antivirus/status/12376853435807539...


> Popular JS VMs have huge attack surfaces

No, not really? Depending on the browser, they generally have a small-to-medium attack surface. Yes, they can JIT, but often they can't do much else.

> and are prime candidates for gray and black market vulnerability hunts

Because they are remotely exploitable, nothing more.

> They are often not maintained

The world's deepest pockets and countless hours from the world's smartest minds go into maintaining them…

> once a vulnerability is discovered, the entire app is compromised

Not in modern browsers.

> In the case of a highly-privileged process

Oh good, so not the JavaScript process, right?


How many vulnerabilities have existed in Electron apps, sandboxing and all?

I meant maintained by app developers who include the runtimes, not the runtimes themselves.


> How many vulnerabilities have existed in Electron apps, sandboxing and all?

Significantly fewer than you'd find in a comparable C++ application, probably, and with much less effort put into securing things like "if I index into this array am I allowing for an arbitrary write primitive" and "can I safely use this object without giving an attacker code execution". Electron bugs tend to be of the sort "oops, we can load a file from the filesystem because we forgot a string check", and C++ bugs are "that, but with the other things I just mentioned".


> probably

Based on what? In C++ you have complex systems with code that is difficult to get right. With Electron, you have terrible chat apps that take 1GB of memory to display a few chat bubbles and that allow remote execution on the machines running them.

The data to compare the two is just not there to assume anything like you just did. Meanwhile, Electron apps have proven quite insecure, despite not being able to allow an arbitrary write primitive by indexing into an array.


> Popular JS VMs...are often not maintained

The V8 engine, in 2020, is one of the most actively-maintained software projects of any kind. And (for better or worse) nearly everyone who needs a JS engine uses that one. This includes - among others - Chrome, NodeJS (which means Electron too), and now Edge. The only major outliers I can think of are JavaScriptCore (iOS/Safari) and SpiderMonkey (Firefox).

The sin committed by Avast was rolling their own version of something, as a less-than-massive-company, when the state-of-the-art implementation is OSS. That has nothing to do with JavaScript the language. You're commenting on things you clearly know nothing about.


I'd say it makes a lot of sense. You're comparing a memory-unsafe language with a safe one.


The runtimes are also written in “memory unsafe languages” (C++). The runtimes bring a whole lot more code than if you wrote your own tailor-made code, meant to do something specific, in a “memory unsafe language”.


Yes, but the runtimes usually have a large amount of time and security effort invested into them.


This is not correct in my experience. I think it's more apt to say that security effort has mostly been spent on sandboxing and related technologies, which is really an admission that there is no way to secure the JS VMs themselves. The best engineers in the world can't do it. Maybe that will change if they move to safer languages, but so far nobody has done that.

Therefore, when you see an exposed unsandboxed VM, you instantly know it's a critical issue.


Writing secure C++ is quite hard, even for the best engineers in the world. However, that absolutely does not mean that your handcrafted C++ code is more secure than JavaScript running in a virtual machine that is written in C++. The sandbox exists as another layer of defense, not because the code is inherently more insecure. (Also, it's usually because JavaScript virtual machines evaluate untrusted input, which is something that has been shown to be notoriously difficult to secure against in general.)


Can I invoke my @pcwalton card here? :-)


Love it when the C++ dev enters the debate and claims with a straight face that security vulnerabilities are a problem in other languages.


It's not about C++, it's about selecting a more appropriate tool than JS. JS is often used only because the developer knows nothing else. What's even more ridiculous, often JS is not even the easiest route.

Serious question, what are reasons to use JS in non-web contexts, apart from developer familiarity?


It doesn't matter if it was a Delphi interpreter. Having an unsandboxed interpreter running unsigned code was a stupid move. That some C++ developer thought it wise to do this is perhaps part of the issue.

V8 on the server has a very nice event loop that's easy to leverage for high performance while avoiding horrifying overflow issues; it fits well for a large majority of web request/response patterns while still offering significant developer speed.


To be fair, this is the case in just about everything. If you know one language or tool, most will use that tool to do what they need instead of learning something else that might or might not be used ever again.

Some even find it fun to bend something that isn't meant to be bent.


It's the only kind of JIT'd code you're allowed to run on iOS without going through Apple's approval process. Pebble apps use JS for any code they need to run on the phone (as opposed to the watch) for this reason.


(Technically there's also WebAssembly as of recently, but it's part of JavaScriptCore so this is somewhat pedantic.)


> @paraboul is on point: JS is often used only because the developer knows nothing else.

I never said that.


Apologies, I misinterpreted. Redacted


JavaScript is one of the fastest scripting languages, for one. It often has pretty decent bindings to native code as well.


JavaScript is the only thing that you can really run on any semi-modern device. TVs, phones, laptops, desktops, servers, the only thing you can expect to execute on all of them is JavaScript. If you write your core libraries in JavaScript, you'll have that much less to worry about re-implementing and maintaining in something else. You'll have flexibility to potentially execute the same code on either client or server, phone or desktop. There are situations where that's pretty useful.

More than that, at least last time I checked, V8 is really fast. It is many times faster than the usable Python implementations, or practically any other memory-managed runtime. Only LuaJIT seemed to sit in the same ballpark when I pulled up the shootout a couple of years back.

I personally hate all of these facts, but sometimes, they really do mean that prioritizing JavaScript, or at least something that compiles down to JavaScript, is the best choice.


> JavaScript is the only thing that you can really run on any semi-modern device.

Wait, hold on: you can usually run C on most devices.


Generally true, but we both know that it's not so simple, and I'm surprised HN took this bait so readily.

With JS, you have to worry way less about hardware-specific builds, platform-specific linking implications, differing system behavior and intrinsics, or any of the other substantial hangups that become relevant when you need to distribute a native application across a wide range of devices.

We don't need to repeat the rest of the thread where everyone hops in and says "tut tut, hypothetically, it would be possible for it to not be that way". We're talking about the way things actually are. In an ideal world, JS would've been out of the picture about 3 years after it was born. :)


The difference is that C needs to be compiled to run on anything, javascript does not.


Right, but JavaScript needs a runtime to work at all. And I don't think there was any requirement that the language couldn't be compiled?


It's not like you seriously want to run C without a runtime.


Many people do in fact do this, often for embedded systems. And most other systems happen to ship with a C runtime.


There are C interpreters, commercial and open source since the mid-90's, don't mix languages with implementations.


This is more a case of lazy or deeply ignorant use of running a process as root, and it reflects on the recklessness of the system designer, not on whatever that process happens to be.


> Because low-effort JS developers are now everywhere.

I take it you're not a big fan of JS? That's a lot like saying you're not a fan of hammers. Maybe you aren't good at using them, maybe the noise scares you; maybe you think hammer wielders are all idiots and the only smart people are shovelers. It's a tool; it works better in some situations, worse in others.

Low effort <insert language here> developers are everywhere. Lol. Please just stop. Any language is a bad language if used poorly. Literally, JS is just as bad as C++ in the hands of the incompetent.


I don't think JS devs are the ones embedding JS interpreters into binaries.


Ever heard of Electron?


> Locating and hiring C++ engineers at a scale is something that has become very, very difficult.

AvastSvc.exe is not the place where you need programming 'at scale'.


The problem here is actually that the scanning engine is running as SYSTEM in the first place. Whether to have a JS engine/emulator in there is a separate matter. As usual, "endpoint security software" is very poorly engineered. Keep in mind that this is a common pattern among vendors, though some are even worse (e.g. Symantec used to do this directly in kernel space).


I actually agree with your conclusions, however, if they had dropped privileges before running javascript it would be worlds and worlds better. Whoever bootstrapped the C parts should have known this.


In this case, the interpreter is included for analysis of JS code and was seemingly custom made for that purpose, not to leverage JS developers, so your point doesn't apply here specifically.


Part of it is that C++ is poorly taught in most universities. Even if you do get an education in C++, try using any of the modern features like smart pointers on your homework. Your teacher most likely will give you a poor grade.


> Just as JS should not be found on the server

Said who?

> You can't have an entire industry push this terrible ecosystem, then expect security companies to miss out on the fun.

I would expect most security experts to push you to use JavaScript instead of C++, since the former will protect you from a number of rather common security issues in the latter…

> Locating and hiring C++ engineers at a scale is something that has become very, very difficult.

Is it really that hard? Here, I can help: I know C++, and I'll be graduating soon. Hire me ;)


If you haven't spent substantial amounts of time (personal estimate: > 10 years) working with C++ writing production code, you certainly don't know C++.

Even if you did all that, odds are that you still don't know C++.


Ok, so I lied, I don't really know C++ because nobody really knows everything about C++. But I have written production C++ code (some of which is used by most of the people here, including you…) so ¯\_(ツ)_/¯. Anyways, this is veering off-topic, so if you or someone else would like to discuss your hiring woes and/or would like to test whether I really know C++ my email's in my profile; I'd be happy to talk to you there.


> If you haven't spent substantial amounts of time (personal estimate > 10 years) working with C++ writing production code , you certainly don't know C++.

Yeah, this is part of the reason why I won't even try to learn the language.


Don't let them scare you away from learning it; you can absolutely write C++ to a useful capacity in much less time than that.


Ah yes, the "No True C++ Developer" argument


That’s scaremongering. You may not know all of the syntax, but you can certainly write proper C++ code.


Avast was founded in 1988. Now imagine what its codebase looks like.


From experience, large/old software companies have their own rules about what C++ is acceptable. A developer will take some time to learn them, but will then be able to blend in. Again, saying someone cannot write C++ without 10 years of experience is nonsense.


That's not at all what I wrote though.

You can very well write something without really knowing it. In fact it's more than obvious that the majority of code written these days falls under that category.

Programming is hard and it takes years if not decades to "know" how to do it to an extent that's not harmful. This also applies to learning the tools. Some tools are easier to learn than others. C++ is notoriously difficult to learn.


You can write in a language even if you don't know it fully. C++ is a difficult case, because you have a historically rich syntax with even more richer modern syntax. But that doesn't mean you cannot write in the language, it just means you will focus on a subset that you do know. This is true about any language. Do you know about far pointers in C? Do you know all the intricacies of the Swift or Kotlin syntaxes? That does not mean you can't write software in these languages, or that you don't know them.

JavaScript isn't free of this at all; it also has a very complex syntax, due to many years of lumping on more and more features without any coherent design. The tooling around JavaScript is notoriously bad and broken. Having to rely on packages for basic stdlib functionality, having to understand how nested dependencies can and will collide, etc.—all of that creates much higher cognitive load than having to use CLion or Visual Studio (not Code) for C++ development.


Apparently Microsoft, Google and Apple see it otherwise, hence their initiatives to improve C++ static analysers, lock down the use of C and C++ on their platforms with focus on safer languages, and even start making use of hardware memory tagging.


Be sure to keep on complaining about Apple not allowing interpreters on iOS though


Was it intentional that the bug you linked to is assigned and owned by this same dude?


Tavis Ormandy has been dismantling AV software, one after another, for some time now.


Psst…there's a Linux harness to load the DLL.


The real tragedy is that there are still "security" standards bodies mandating that AV software is installed on clients. If you want to be PCI-DSS compliant for example then you better install AV software.

Even worse, sometimes the requirements go even farther and require the AV software to be third-party - OS built-in solutions don't count.

At this point, the only safe solution is running proper whitelisting to make sure that only authorized binaries get to run, and to keep those binaries up-to-date and, if possible, sandboxed.

If you have such a setup and then you have to install AV like this because of security-theatre, you're making your security much, much worse (because in order to run AV, you have to whitelist it and because AV is apparently written to quality standards that allow running arbitrary user-supplied JS as SYSTEM)


Right. AV software seems really poor quality.

I recently updated my Windows app. VirusTotal showed 14 AV products detected my program as malware, when it clearly wasn't. I tried to report the false positives, but the vendors had terrible and inconsistent ways to do this (or even none for some). Even Microsoft's website didn't work when I tried to upload the program ("upload failed - please try again later").

I searched on the internet, and at least I found a workaround which was to recompile a library I used. This fortunately reduced the false positives down to a few products I hadn't heard of.

At this point, it's very hard to trust AV products with anything.


Who defines what "AV" is? Isn't your whitelisting solution also an "antivirus"?


NIST actually defines most of these standards and definitions. "Antivirus Software" has two:

A program specifically designed to detect many forms of malware and prevent them from infecting computers, as well as cleaning computers that have already been infected.

A program that monitors a computer or network to identify all major types of malware and prevent or contain malware incidents.

[1]: https://csrc.nist.gov/glossary/term/Antivirus-Software


> as well as cleaning computers that have already been infected.

That's not a thing.


Well, all AV software seems to offer the option, but I don't know many folks who would trust it alone.


Pretty much every definition defines antivirus as software that inspects files and attempts to decide if they are malware or not (and then blocks access etc).


> (and then blocks access etc)

Not so much. I'm required to run AV software on production Linux systems. All it does is write opinions to a log.

Insofar as the only reason we run it is checkbox-compliance, I'm fine with it being useless - certainly not asking for some kernel module or something that could actually block access. But I do find it funny.


I managed to run the sample app on a Linux box.

Observations:

- I am not sure this is a 'full' javascript engine. I am thinking more in line with static-eval [0]

Example 1: Date.now() returns 'Exception: undefined'

Example 2: console.log([1, 2, 3].map(function(x) { return x; })) returns 'Exception: function(x) { return x; }'

- I couldn't manage to access DOM document.write("<h1 id='x'>html</h1><script>console.log(document.getElementById('x'));</script>");

returns empty

- After reversing the DLL a little, it seems it is used as an unpacker (it also has VBA support)

There can still be some vulnerabilities, but saying it runs a full JavaScript/DOM implementation is 100% wrong

[0] https://github.com/browserify/static-eval


It doesn't have a bunch of standard functions (or they're not hooked up correctly?) but it is very much evaluating input.


My first assumption on seeing Date.now() throw would be that it’s implementing an older standard. new Date().getTime() would be the ES3 equivalent.
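That would be consistent with the standard ES5 shim, where Date.now() is defined in terms of the older API (this is the well-known polyfill pattern, not anything taken from the Avast engine):

```javascript
// ES5-era shim: Date.now() is just sugar for new Date().getTime()
if (!Date.now) {
  Date.now = function () {
    return new Date().getTime();
  };
}

var a = Date.now();
var b = new Date().getTime();
// The two should agree to within normal clock resolution
console.log(Math.abs(b - a) < 1000);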


Thanks for bringing some facts to the discussion.


For as long as I can recall, virus scanners have been malware themselves.


Hasn't always been like that however.

I think it only became like that after the built-in malware detection of Windows 10 became good enough, so the antivirus vendors started adding "features" to make themselves stand out and look good on enterprise comparison charts.

Thinking back on the Windows XP days... You'd be in actual danger if you didn't use one.


Windows Defender also had the same defect [1]

[1] https://arstechnica.com/information-technology/2017/05/windo...


Sure, while they actually caught malware back in the XP days, they were also still malware and some probably spied on their customers even back then. So back then it was about picking your poison.


If spying on your customers is malware (I think it counts), then Windows 10 itself is malware.


Not disagreeing in the slightest. My home laptop (Inspiron touchscreen model) will be on Ubuntu when the HDD gets replaced, with little regret.


I'm a fan of Ubuntu too, but you should note that they don't have a perfect reputation here either- https://www.gnu.org/philosophy/ubuntu-spyware.en.html


I do remember that, and I admit I haven't researched current distributions yet, so I'm a few years out of date.


Anti-virus software is the anti-vaxxer of the digital world: falsely believing it's in the correct group and benefiting society.


Antivirus software today exists only so that tech support scammers can install something on their victims' computers, and then point to it to prove they were not scamming anyone.


is ESET (nod32) still legit? that's the only one I've tended to trust



What exactly does a short list of CVEs prove? Is there any mainstream software that doesn't have at least a few published?


ESET has had high severity CVEs in a security product, including SYSTEM RCE and kernel RCE.

Agree that 90% of CVE's are meaningless but unless they've done a lot of sandboxing work in the meantime (guessing not, and up to them to show that) it's hard to trust.

Given the CVEs published, do you feel confident that if the product were robustly fuzzed/reversed tomorrow there wouldn't be low-hanging RCE? How safe do you feel running Windows with that product versus without? Personally I trust Microsoft's engineering/SDLC more than ESET's, but maybe that's just me.


Windows Defender works reasonably well.

Symantec broke Chrome on multiple machines for me.


Windows Defender had exactly the same design, with a similar bug [1]

>The Google researchers found that MsMpEngine contains a component called NScript that analyses any filesystem or network activity that looks like JavaScript. NScript isn't sandboxed and runs at a very high privilege level, and it's used to evaluate untrusted code by default on almost every modern Windows system

Every antivirus is bad.

[1] https://arstechnica.com/information-technology/2017/05/windo...


> Last week, 3/4 @taviso reported a vulnerability to us in one of our emulators, which in theory could have been abused for RCE. On 3/9 he released a tool to simplify vuln. analysis in the emulator. Today, to protect our hundreds of millions of users, we disabled the emulator.

https://twitter.com/avast_antivirus/status/12376853435807539...


Very nice writeup!

It contains a link to Avast's Coordinated Vuln Disclosure site: https://www.avast.com/coordinated-vulnerability-disclosure and this has a link to Avast PGP key that's served via unencrypted HTTP: http://virfile.avast.com/viruslab/avast-bugs-pgp-key.txt Not only that, the key is a weak 1024 bit DSA key :(


I'm hoping this isn't a dumb question, but why does it matter that a public key is public-facing and unencrypted?


If someone intercepted the communication, they could swap the Avast key for their own, allowing them to decrypt your message.


What jurgemaister said. If you don't have another trust mechanism (like Web of Trust) to validate if this is a correct key then HTTPS gives at least some assurance that no intermediaries between you and avast changed the key material.


Someone [1] mentioned this hash d0e7e0e0287cd5a6ee36c74557ebf70f38235f90c8ce07c75d49721e379503aa [2] for Tavis to have a look at. Another AV company that ships a js interpreter?

1) https://mobile.twitter.com/buherator/status/1237115773409206...

2) https://www.virustotal.com/gui/file/d0e7e0e0287cd5a6ee36c745...


A search suggests it's Norton?


Has antivirus always been this sloppy? I recall a time in the 90s/early 2000s where the engineers working on these products were considered the best in the business.


The problem has gotten exponentially harder. AV used to be "look for this set of hashes" now it's "dynamically detect this behavior, and oh by the way everything is obfuscated script code not a binary".


I think part of the decline is just from its decline in popularity. Less money coming in means lower standards and, in some cases, less-scrupulous business models.

Have you seen cable TV in the past five years? It's gone so downhill. Half of the content is still in 480p, awkwardly stretched to wide-screen. Commercial breaks seem like they're twice as frequent as they used to be. Shows being aired have drifted even further into reruns-and-reality-TV territory. Even the content of the ads has shifted somewhat, from recognizable brands to injury attorneys who look like they filmed the ad themselves.

You can see the money being drained from the husk of an industry, and you can see the increased desperation as they try to scrape together what money they still can. The same thing is happening with antivirus software.


It seems like quality lowered over time for several reasons: 1) free products became popular, reducing revenue, 2) increased automatic updates and better Windows safety models reduced the need, and 3) Windows Defender became decent around Windows 10, so other offerings became redundant.


it's not just about windows defender becoming decent (although that's certainly a prerequisite). the motivations are more aligned in favor of users when OS providers are also the AV providers. an OS provider wants viruses to be eliminated, because they harm the OS' reputation. a third-party AV provider wants viruses to continue to be a problem, because consumers' fear of viruses drives their revenue. all other factors being equal, if i'm using a microsoft OS, i want a microsoft AV.


It seems like there are far more people working in the antivirus sector, writing all kinds of different software, so the quality of code has gone down. One example is AV companies writing things like browser plugins that hog memory, or make browsing unusably slow.

Another thing to think about is that tons of features that were never added before, are in the products now. Basically being pushed by management because more features = more sales, even if it ends up bloating the product. I remember a few years ago when we were acquired and had to switch AV vendors to McAfee, it made our oldest group of hardware basically unusable (and these were basically kiosk-type machines). The application we were using required IE so we couldn't replace them with Linux machines either.


They have been this bad since at least the early 2000s. Maybe they were good in the 90s, I was too young back then and did not know enough about computers.


No, back then, compute cycles were a scarce resource, so an antivirus had to be lightweight and targeted. They only started to get sloppy when processors started getting faster than real software had a use for.


Actually yes, in the (old) past they were not only "sloppy" but some (a few?) were even malicious, _creating_ a virus which only their program could handle (but which also didn't do much damage besides annoying the user).

Though instead of sloppy I would say it's often more along the lines of being misguided about what is secure and _overconfident_ about their own skill to write code without security vulnerabilities. And if there are no vulnerabilities, no sandboxing is needed, right? (sarcasm)

Though there were, and hopefully still are, exceptions to this. But don't ask me which ones, because I haven't been on Windows for a long time.


> Actually yes, in the (old) past they were not only "sloppy" but some (a few?) were even malicious, _creating_ a virus which only their program could handle (but which also didn't do much damage besides annoying the user).

Do you have any proof of that, or do you just like bullshitting people so you appear knowledgeable?


> in the (old) past they were not only "sloppy" but some (a few?) were even malicious, _creating_ a virus which only their program could handle (but which also didn't do much damage besides annoying the user).

You know what, I don't even care if this is made up, it's a great story anyway. It would be a great plot for a movie.

Ivan Vladimirovich is a Soviet computer hacker who creates computer viruses on behalf of AntivirusCorp. One day, he gets a mission that will come to change his life forever...


AV was always a snakeoil business that always produced low quality products which worked against security by increasing the attack surface and offering low-barrier-to-entry ways to remote code execution.


I don’t have access to a Windows machine, but what does ‘full’ mean in the title? Like, can it open outgoing network connections? Can it load remote code?


It means a JavaScript runtime is used to check for malicious JavaScript code. An attacker could write a malicious file expecting Avast to scan it; the antivirus scans it and, if it thinks the JavaScript is worth interpreting, runs it to check what it does. If you escape that interpreter, you can run code as basically the Windows equivalent of root.


But... JavaScript never allowed computer files to be read? Or are you saying that also includes NodeJS? What exactly can JavaScript do? Fetch data from the internet?


The JavaScript interpreter runs with SYSTEM privileges. A bug allowing for code execution would inherit those and thus be able to do a number of malicious things.


Have a UAF in the DOM code which allows a shellcode payload to run with SYSTEM.


it's a valid question! From the github the author writes "Despite being highly privileged and processing untrusted input by design, it is unsandboxed and has poor mitigation coverage. Any vulnerabilities in this process are critical, and easily accessible to remote attackers."


I see strings for XMLHTTPRequest and this runs unsandboxed, so I'm not feeling too good about this.


The readme shows it runs as SYSTEM which means it can write to disk and make network calls, etc.


well. the process that contains the JS engine can. The JS engine itself seems to be limited to what JS engines in browsers can normally do, so no intentional file system access.

However as the README says, this is a custom built implementation, built by a company who believes running a JS engine with SYSTEM privileges is a good idea. This means that there are probably exploits available and those do get full access to the system as the highest privileged user.


And, let me guess - this JS scan is a part of their "web protection" stuff that runs on the websites you browse? Because that would mean attackers can drive-by exploit a lot of people with a bit of malicious script attached to an ad.


> That service loads the low level antivirus engine, and analyzes untrusted data received from sources like the filesystem minifilter or intercepted network traffic.


This can't be good for Avast. I used them earlier but stopped after seeing Windows Defender become better.


I’d assume whether this runs escalated or not is considered not relevant


Just wondering, has it got anything to do with [1] Sciter, the UI library many AV products are using?

[1] https://sciter.com


I wouldn't be surprised if Sciter comes bundled with a VM for running JS, but it's not explicit on their website. They do position themselves as an alternative to Electron. And it looks like every anti-virus package uses this product.

"In almost 10 years, Sciter UI engine has become the secret weapon of success for some of the most prominent antivirus products on the market: Norton Antivirus and Internet Security, Comodo Internet Security, ESET Antivirus, BitDefender Antivirus, and others. The use of HTML/CSS has allowed their UI to stay in touch with modern GUI trends throughout all these years, and will continue to well into the future.

Sciter Engine is a single, compact DLL of 5+ Mb in size. Application using it are 10+ times smaller than the ones built with Electron or Qt. And size of the distribution matters, one of main Sciter’s customers discovered “golden 40 seconds” rule: for the user, to buy a product, it should not take more than 40 seconds from the click on “download” button to the UI to appear on screen."

---

As mentioned by others, this would be separate from the JavaScript VM mentioned in the OP and would not run as a privileged account (it would just be the UI people interact with).


> one of main Sciter’s customers discovered “golden 40 seconds” rule: for the user, to buy a product, it should not take more than 40 seconds from the click on “download” button to the UI to appear on screen."

Is this another case of a metric becoming a target and thus no longer useful as a metric? The quality of software should be how well it performs its intended purpose, not by the conversion rate of the user funnel.


You can have the best performing software in the world and it is still worth exactly nothing if you can't sell. The reality is that cheaply developed software that sells well is usually good business.


Although one would not expect UI components in the privileged service component.


With my limited knowledge on the matter, NOD32 (ESET) has 3 processes: ekrn.exe (SYSTEM), eguiProxy.exe (User) and egui.exe (User).

egui.exe is responsible for the UI interacting with the user and seems to not run at all if the user never brings up the UI.

Considering the memory usage of the UI (~25 MB), it looks like they chose to run a very lightweight UI (tray icon only) with eguiProxy.exe (2 MB), then start egui.exe if the user brings up the UI.

Of note: the memory usage of the UI is almost the same as ekrn.exe, which I suppose is the AV engine [1]

[1] I don't run the full suite and some features have been disabled (SSL MITM, web inspection and email inspection)


Sciter ships with its own TIScript engine, which is a small subset of JavaScript, mostly for interop with the application and manipulation of the Sciter HTML renderer.

The original thread seems to be about a JavaScript interpreter for sandboxing and analysing JavaScript in-engine.

The UI process doesn't run as SYSTEM either, since there is no GUI in that space. You'll find the Sciter code in the user processes (tray and app).


Most third-party AV software today is useless adware/bloatware. On my Windows machine I rely on Windows Defender + MBAM, never had any malware infection.


Isn't Avast the same software that auto-includes a signature in all the mails you send? Without a browser extension.


Apparently Avast has now disabled the JavaScript interpreter for all of their users: https://twitter.com/taviso/status/1237745571009409029.



Unrelated: https://github.com/taviso/loadlibrary

It lets you call Windows DLL functions from Linux!


It's not unrelated. It's part of the source (as a submodule), and linked from README.


Maybe post unrelated links as their own HN stories?


Well, I don't really post Hacker News stories, and given that this was one of the first links on the page, I felt someone might find it interesting. (It also indirectly tells people that they can try the thing on Linux…)


Well, I hadn't seen that before and thought it's pretty cool, so I posted it in case others might also find it interesting: https://news.ycombinator.com/item?id=22544777


The other way to do this would be to compile a test harness using winelib, since winelib compiled applications call in to both native and Windows DLL code.

https://wiki.winehq.org/Winelib_User%27s_Guide

I've done this myself to exercise a COM DLL from Linux. I had to use Microsofts "OLE/COM Object Viewer" to extract the IDL from the DLL and then compile everything with MIDL and wine-g++, but it worked.


What can an unsandboxed JavaScript engine do? Can it read files? Read data from my computer's memory? Send my credit card info to the internet? Is this reading my Chrome profile data or running Node in any way? Access my camera/microphone/keyboard?

I fail to see how this is a problem, and I've been programming with JavaScript for 8 years...


If there is no vulnerability in the JavaScript engine or any of the APIs they exposed to it in order to mock up a web browser, then it can't do anything. The problem here is that that's not exactly a tiny attack surface, and modern web browsers implement defense in depth and sandbox untrusted code for very good reasons. If there was some RCE in a part of this JavaScript engine, you're not just executing inside of some locked-down environment, you're not just executing code as an untrusted user, you're executing code in a process with SYSTEM level permissions.

This is like putting a 2 meter wide thermal exhaust port on your death star. On the off chance someone manages to hit it, game over. This process runs untrusted code so if you can get a file opened on the target computer, or even just get the user to go to a malicious website, you can try to attack this. Once you get some payload running yeah you could use bog standard crimeware to sniff out any credit card details entered, export your saved passwords from your web browser, look for any wallet.dat equivalents and run a keylogger waiting for you to decrypt it, drop some ransomware on the system, etc. This gives full control over the system if you find an exploit to it.

But yeah, this isn't immediately a vulnerability, just a poor design decision and a very juicy target.


> This is like putting a 2 meter wide thermal exhaust port on your death star.

Except it's probably more like the second Death Star because it's unfinished and there's a gaping hole in the side of it that you can just fly in.



Chrome and Firefox have both continually developed a secure JS engine for over 10 years, and there's still vulnerabilities found in them all the time...

Now imagine a company's internal JS implementation without all that engineering effort... vulnerabilities will become apparent immediately


An engine like this would likely not implement the hardest bits - JIT and optimization. Many, if not most, vulnerabilities in modern JS engines arise from performance optimizations.


> What can a unsandboxed javascript engine do?

It can do everything that the user could do, and the user in this case is SYSTEM. I understand SYSTEM can do everything.

It could install a keylogger, for instance.


If the js engine has no bugs it cannot do anything harmful, but since this is Avast's own custom engine it likely has security bugs which allow malicious JS code to escape the engine, and if it does it can do virtually anything with your machine since the engine runs as SYSTEM.

Writing your own JS engine and then running it as SYSTEM is terrible engineering and very dangerous. There is no reason why they have to run it as SYSTEM.


Has anyone confirmed it is their own? I'd be surprised if it wasn't a shaved-down v8 or similar.

Lastly, have we confirmed there's no memory or thread protection in place? Most reputable AV companies have strong proprietary sandbox code (especially in the scan engine process) which is on par with virtualization in terms of isolation.


It doesn't look like V8.


This runs as SYSTEM, which is the highest privilege set in Windows. Yes, a vulnerability will result in absolutely everything you asked about.


To be picky, not the highest privilege: SYSTEM is prevented from doing things that TrustedInstaller can do.

Hence the existence of a number of Windows tools to get TrustedInstaller privileges, some examples listed in this thread:

https://msfn.org/board/topic/181190-how-to-overwrite-dll-fil...


Using a third party antivirus solution in 2020 just sounds like a bad idea.


I am unable to replicate this on Windows 7 x64 (yes I know I should upgrade, but I am a crusty ol' XP user)


Yesterday on W10 I had to create registry keys to run an installer because the normal ways of unblocking the executable were simply missing off the menus. I miss Windows 7 every goddamned day.



Or you can right click on the file in explorer, choose properties, uncheck the checkbox at the bottom (I forget the text in it, sorry) then click OK.


That's the problem. The executable was missing the Unblock checkbox despite throwing an "app has been blocked for your protection" pop-up.


If I remember correctly, there is a version of Windows 10 which has severe restrictions on what you can do, including fully blocking "untrusted" installs. In turn it's a bit cheaper. But I'm not sure what they named it; maybe that was related to it. Also, it's not uncommon for Windows to be able to use features your version didn't have through the command line.

Edit: sorry swipey keyboard messed up some words.


There's Windows 10 "in S mode" which does this, but it's not a version of Windows sold at a discount, just a "mode" you can run Windows in. It's possible to (irrevocably) change from S mode to normal Win10.


Typical modern application development. "JavaScript all the things!" because "we can make good UIs!" with no regard for safety, speed, or reliability.


Security theater at work



