Even when they perform their half-solution, he still evaluates it on its merits, and then suggests how they can do better. A model of professionalism in the face of an absurd situation.
(the default will uninstall Trend Micro Maximum).
It's like "They're using JS; how could it possibly be secure?!". That's actually highly ironic considering that JS is so far the only language that is secure enough to run universally in every browser on the planet.
People can't get around the fact that JS has evolved a LOT since it launched, yet it still suffers from a bad name.
The result is that your server-side JS is more prone to different kinds of security issues.
On the client side, in the browser, JS is sandboxed, so the language is almost irrelevant. If JS can't actually access the underlying system, no number of bugs makes the code a danger to that system.
None of those is a factor in this instance. It seems to me it's the result of calling exec() on unscrubbed user input, and that can be done in any language.
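The pattern is easy to show in any language; here is a minimal Python sketch (the input string is made up) of why shelling out with unscrubbed input is dangerous, and what the safe alternative looks like:

```python
import subprocess

user_input = "hello; echo INJECTED"  # attacker-controlled string

# DANGEROUS: the string is handed to a POSIX shell, which parses the
# semicolon and runs the attacker's "echo INJECTED" as a second command.
unsafe = subprocess.run("echo " + user_input, shell=True,
                        capture_output=True, text=True)

# SAFER: pass an argv list with no shell; the whole string becomes one
# literal argument and shell metacharacters are inert.
safe = subprocess.run(["echo", user_input], capture_output=True, text=True)
```

In the unsafe call the injected command actually executes; in the safe call the entire string is echoed back as a single literal argument.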
JS is not "the only language that is secure enough to run universally in every browser on the planet." In fact, JS has had many, many security issues in browsers.
Rather, JS is the only language nearly every browser on the planet has implemented (hopefully sandboxed!).
This is just not true.
Still, there isn't much faith I put in any endpoint security solutions. They are all terrible.
Bromium seems to be bucking the trend of traditional endpoint security but they have one of the worst sales / business dev programs I have ever seen. They should be much more ubiquitous than they are.
If they look for (and find) new jobs, wouldn't that just mean the problem is diffused elsewhere? Personally, I would hope they learn from the experience of failing rather than be punished for it.
trendmicro.com > About > "Smart, simple, security that fits"
And then second is, "a global leader in IT security" and "25 years of security expertise".
What a crock. I'm supposed to take the company's position statements and products seriously after reading this issue report? This is like finding a sponge in the body cavity of a patient. It's functionally malpractice. The CIO and CTO should be fired. The CEO probably should resign, what else is the purpose of a CEO other than to make sure the main things the company stands for are true, and actually ships products that demonstrate it stands for those things? If they don't resign the board needs to fire them.
How is it possible that a company which describes itself in the terms it has, has not done a thorough code review of all products before making them public? That is implicit in its own description of what its business does.
I'm not even sure the worst parts of this particular product's flaws would have escaped a cursory code review by someone who is actually a security expert. And if that's true, then selling this product as it was before patching might be fraud.
I don't think throwing coders or designers into a pit when they make big errors does anything helpful. Most likely, Trend has a cultural problem. Big errors like this can be an aid that spurs corrections.
I'm really looking forward to the day where the tools are mature enough to make this an option.
A space elevator is simple. Building one is very much not trivial.
This is similar.
Packaging a server backend along with a minimal kernel and V8 VM isn't any more complicated than most of the build tools used today.
Here are the specifics:
When you cut 99% of the crap out of an OS, it becomes a lot easier to package/distribute.
Immutable operating systems aren't a new idea either. How do you think a Linux LiveCD works? ChromeOS is basically an immutable OS with an added persistence layer.
There's a lot more work to be done before any of the Unikernel implementations (ie NodeOS isn't the only one) are ready for production.
With that said, for webservers that aren't required to persist any state locally, it makes sense to remove mutability -- and therefore OS-level security vulnerabilities -- as a concern. That way, devs have more time/resources to focus on app-level security.
Locating and regularly patching security vulnerabilities across thousands of components in a fully-featured monolithic operating system isn't. It's a potential disaster waiting to happen.
You don't need...
...a huge bundle of drivers when the OS will always run on a VM.
...extensive filesystem support when everything will be either transient or run directly from memory.
...multiple users when only one is required.
...OS-level sandboxing (ie kernel/user-space) when the VM already provides sandboxing.
...native POSIX tools when 'safe' alternatives can be run from the VM.
Despite the best intentions of developers and admins alike, the current approach to security is not working. Despite my own vigilance, I have personally had my sensitive information leaked by two separate multi-billion dollar organizations in the past year.
It's a simple fact that every feature added increases the attack surface of the entire system. All I'm suggesting is that it's not a bad idea to start looking at the alternatives that are becoming available.
(Whether or not anti-virus at all is effective is another debate entirely.)
Send an email containing the offending JS that calls shell execution to several people in the organization; this could have a huge impact.
"Security software" LOL
But yeah, an image will most likely do it.
> I happened to notice that the /api/showSB endpoint will spawn an ancient build of Chromium (version 41) with --disable-sandbox. To add insult to injury, they append "(Secure Browser)" to the UserAgent.
> I sent a mail saying "That is the most ridiculous thing I've ever seen".
This is indeed unbelievably ridiculous.
I think the Blaster Worm and other variants that took advantage of that excellent decision were far worse.
A FreeBSD 4.x system with a (modestly) stripped down kernel and running sshd was not only rock solid in 2003, but would probably be rock solid today.
Just to pick one random example.
Nothing was secure in 2003.
a) incredibly far-fetched theoretical attacks that didn't work in almost 99% of live deployments
b) local privilege escalation that required a real unix login on the system to exploit
I think if you had left a FreeBSD 4.x system running on the public Internet all of these years you would have been untouched.
You don't even have to be a security specialist to know that there's something wrong with your argument, because you're talking about exactly the time period where OpenBSD --- which is more secure than FreeBSD --- comically started having to change its tagline from "no remote vulnerabilities in the default install" to "just one vulnerability in the default install" to "only two remote vulnerabilities in the default install for a heck of a long time".
Even OpenBSD concedes it wasn't secure in 2003!
* It comes with its own http server (apache, with a conf file mentioning NCSA (!))
* Their realtime kernel module barely compiles (and only on quite old kernel versions), has disgusting code and a disgusting Makefile, and makes the computer slow or simply crashes even when it sort of works.
* They ship their Antivirus with quite old libraries, some compiled more than 10 years ago, and some probably impacted by several CVEs (openssl < 1.0.0, quite old libxml).
* Their init scripts are an ugly thing written in Perl, launching several services from one script.
* Their rpm packages are just mindfucking. You have one rpm package to install the software, and other rpm packages to patch it... WTF.
For a piece of software running as root (or even worse, in kernel space), written in C, and by definition analyzing untrusted input, that's a bit worrying to say the least.
Makes me wonder why any of them bother, except for the piles of cash they can make off of unscrupulous rubes in management who demand AV software across the entire environment.
Any Apache configuration file mentioning NCSA would be from the Apache 1.x lineage.
It intercepted the not-quite-standard response header "Content-Encoding: ps-bzip2" sent to our Windows client application, stripped off the header it didn't understand but, of course, didn't decompress the payload.
So our application never saw a Content-Encoding header and tried to process the presumed-uncompressed response - that went really well.
Since that day, our server uses a content-type header that contains the word bzip2 :(
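That workaround can be sketched in a few lines (hypothetical client helper; the header names come from the story above): stop trusting Content-Encoding alone, since a middlebox may strip it, and fall back to the Content-Type hint or the bzip2 magic bytes:

```python
import bz2

def decode_body(headers: dict, body: bytes) -> bytes:
    # A proxy may silently strip the nonstandard "Content-Encoding:
    # ps-bzip2" header, so don't rely on it alone: also check the
    # Content-Type workaround and the bzip2 magic number ("BZh").
    encoding = headers.get("Content-Encoding", "")
    ctype = headers.get("Content-Type", "")
    if "bzip2" in encoding or "bzip2" in ctype or body.startswith(b"BZh"):
        return bz2.decompress(body)
    return body
```

Belt-and-braces sniffing like this is ugly, but it's the only defense when an intermediary rewrites your headers without touching the payload.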
The icing on the cake: The customer in question told me that their cousin is working at TrendMicro and that they are by far the best virus scanner out there and that this must clearly have been my fault.
I'm not surprised that they also get other stuff wrong.
However, I'm surprised at the level of incompetence shown here. This is a security product after all.
I think this is actually a feature in many different products from different vendors. If I recall correctly, ISA Server (since 2004!) and the like inspect HTTP and SMTP traffic and validate it for conformance to published standards. If a malformed SMTP message comes in, it discards it. This prevents your mail server from being exposed to malformed messages, which could lead to denial-of-service/remote-code-execution/maybe it will be fine who knows.
Also, if they didn't like my ps-bzip2 encoding, they could have stripped it from the client's Accept-Encoding header, causing the server not to compress the response. But they left that in place and just stripped off the Content-Encoding response header.
The series of blog posts linked below might interest you.
It is, but it sounds badly implemented as described here.
Stripping headers and sending on the rest of the message with no care for the content is bad design, you are essentially corrupting a request or response.
If there is a problem with a message discard or quarantine the whole thing and log the event (in the case of SMTP perhaps send the target a "we blocked this, check with the sender if you really need it" and/or the sender a bounce if you are reasonably sure the sources headers aren't faked) for future analysis.
This thing with so-called security products should be regulated somehow. Exploits like these should carry a huge fine from the FTC or something of the sort, because this product, and many other "security" products (I'm looking at you, AVG), are blatant deception, if not the exact opposite of security.
> We need the Antivirus-TÜV now!
I can imagine, in the future, an OS becoming like a nation. There's a core, transparent "government" component of the system which takes care of all low-level resource management and security (just like a real-life government), and there are proprietary apps which have secret components but have to play by the rules (just like real private companies). All the government's behavior and its legislators (i.e. maintainers) are constantly monitored by its users and the media, and they are accountable for their actions.
Yes, this is far from perfect, and while we still might have the NSA of a kernel and the Enron of 3rd party apps (whatever it means), I'd argue this is better than the current everything-is-proprietary model.
There are botnets in the millions running on supposedly "Norton-protected" PCs.
No one cares. It's not a problem the people in a position to do anything about even understand.
Having your server hacked is one thing. Making kitchen appliances burst into flames - which is not impossible with remote control - is something else entirely.
It's a national security issue. Imagine a state actor or a terrorist org deliberately launching a synchronised mass IoT attack against another country.
Imagine the attack is untraceable because it's routed through all the botnets out there.
It's great the NSA wants backdoors everywhere, because that will make attacks easier to trace - won't it?
We'll want to document the successes and failures of security software companies, their auditors, and their auditors' auditors.
> Then you can use the decryptString API to decrypt all the strings, and then POST them somewhere else. So this means, anyone on the internet can steal all of your passwords completely silently, as well as execute arbitrary code with zero user interaction. I really hope the gravity of this is clear to you, because I'm astonished about this.
I wonder how much damage such a flaw has caused.
Read the marketing copy for it.
* that TM charges $15/year for any non-toy use of the software (that is, if you want to store more than four passwords)
* the language that describes the "Secure Browser" feature, which is really an ancient version of Chrome/Chromium that has sandboxing turned off.
"This is really a horrific medical safety problem. Hard to believe an oil that is made specifically to cure your ailments has non-foodsafe stuff in it!"
If the packaging of snake-oil tells you about its miraculous properties ... that is, believe it or not, not a reliable source of information.
http://decentsecurity.com/about/ (This entire website)
https://www.microsoft.com/emet (for Windows)
https://paragonie.com/blog/2015/06/guide-securing-your-busin... (basic web security advice for non-technical people)
You don't need AntiVirus.
One obvious difference is that the consequences of mistakes can be higher for STDs than for infected computers, but that's the analogy.
Sort of like the startup community?
Yes, unless you are so computer illiterate as to regularly download old viruses by accident.
> How did no one within the company catch it?
I doubt they do any reviews of their code beyond QA/"Does it work?"
> Aren't security products developed by at least somewhat security-minded people?
No. The people who test for vulnerabilities [i.e. actual security-focused programmers] are pretty much always more expensive than the average Windows desktop programmer.
Now just imagine what companies that process your credit cards look like...
Yes. There's hardly anything* local to a client worth paying money for that will make you more secure.
*--> hedging for a miraculous product I just don't know about
I can't think of any technological approach that would work in such an organization, short of completely redesigning end-user client systems.
Seriously the CIO should be fired for this. It's really that unacceptable. It's not just one attack surface exposed by this.
You know a company is managed ineffectively when you can't even deploy security patches without talking to stakeholders. In any reasonably-managed company, stakeholders would only be involved with the press release announcing the issue (if at all).
The problem isn't solved as soon as a patch is released. This is serious damage control. Handled poorly it blows up. A lone dev doesn't/shouldn't hold all that responsibility.
Where I live, "stakeholders" just means the people responsible for or affected by something. Being "in discussion with stakeholders" about a fix just means talking to everyone involved in getting it deployed.
Is this term used with other meanings?
While there are many oblivious developers, I highly doubt that incompetence was the root cause. I'd point to a lovely mix of a) deadlines; b) slow internal procedures to approve the usage of third-party libraries; and c) requirements being passed to developers without context. It's hell, what corporate environments can push us to do. I'd bet that most of the people who worked in big corps have their own stories about internal procedures making them do things they objectively knew were wrong.
But LastPass, KeePass, Chrome's password manager, Firefox's manager, and IE's are audited all the time, with tons of exposé articles supposedly trying to inform us about how weak they are (but all these articles do is further clarify how strong they are, since they only find trivial issues, or they misunderstand a feature as a bug).
I cannot recall the last time any of these had what I would consider a REAL security bug.
All password managers store plain text passwords. That's literally a requirement for them to work at all.
Chrome encrypts the password in the SQLite database using Windows' CryptProtectData() API, and Firefox encrypts the passwords either using your master password, or if none is set then it encrypts but stores the encryption key in the key3.db.
> Don't rely on them to keep passwords safe.
You've presented no justification for that. If you're using a root compromised machine then no password manager is safe. If your machine is secure then your passwords are secure in both Chrome and Firefox, but more secure in Chrome.
I'm not sure this is what you mean to say, because, obviously, good password managers don't store passwords in cleartext.
So when people complain about password managers storing plain text (as opposed to hashing) they're barking up the wrong tree, it is a necessary evil.
You just want to see them encrypt those plain text passwords so that offline recovery is harder. That's what Firefox's master password, CryptProtectData() for Chrome/IE, and the keychain in OS X provide.
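The distinction can be sketched in a few lines of Python. This is a toy illustration of "encrypt the vault under a key derived from a master password", not real crypto: the HMAC-based keystream below stands in for a proper AEAD cipher, and all names are made up:

```python
import hashlib, hmac, json, os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Slow KDF so brute-forcing the master password offline is costly.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy keystream built from HMAC blocks -- illustration only, NOT real crypto.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def seal(master_password: str, secrets: dict) -> dict:
    # What lands on disk: salt, nonce, ciphertext -- never the passwords.
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(master_password, salt)
    ct = xor_stream(key, nonce, json.dumps(secrets).encode())
    return {"salt": salt.hex(), "nonce": nonce.hex(), "ct": ct.hex()}

def open_vault(master_password: str, vault: dict) -> dict:
    key = derive_key(master_password, bytes.fromhex(vault["salt"]))
    pt = xor_stream(key, bytes.fromhex(vault["nonce"]),
                    bytes.fromhex(vault["ct"]))
    return json.loads(pt)
```

The manager still hands back plaintext when unlocked (it has to), but the at-rest blob is useless without the master password.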
> Chrome encrypts the password in the SQLite database using Windows' CryptProtectData() API
If it's encrypted, then it's not plaintext. It's ciphertext. In infosec lingo, plaintext specifically refers to the unencrypted and otherwise unaltered original information.
A couple of years ago, Chrome joined Safari in using the OS X Keychain. (On Ubuntu, Chrome can also use the GNOME Keyring.)
You don't. Assume every online service you use is subject to security holes. What distinguishes a secure app from an insecure one is only revealed after a security incident. That's why I continue to use LastPass: they exemplified good security practices during their recent scare.
As a global leader in IT security, Trend Micro develops innovative security solutions that make the world safe for businesses and consumers to exchange digital information. With over 25 years of security expertise, we’re recognized as the market leader in server security, cloud security, and small business content security.
Trend Micro Inc. is a global security software company ....
Anyone know if this uses a non-unique key pair like the Lenovo one did?
In every case I've seen of Windows Defender or SmartScreen being disabled or out of date, it almost always turns out to be the fault of a "security product" the user was talked into buying (especially the Norton Insecurity Suite). Defender and SmartScreen together silently but capably handle the main issues on a not-very-technically-inclined person's system. The harder task is convincing users not to install games from disreputable sources (random poker websites, the weird shadows of once-sort-of-reputable places like RealArcade and WildTangent) that install irritating adware and occasionally spyware, short of "taking away the UAC keys" and forcing them to call me to type in an admin password to install software for them, which I don't have the time or inclination to do.
If a not-very-technically-inclined user complains that they see too many UAC prompts, they are probably doing something wrong, and you should help them figure that out.
The very first person who suggested a Node HTTP server should have been fired that very day for offensive incompetence. I could hardly imagine a more satirical example. A hypervisor in Java, perhaps.
But idiots keep bullshitting other idiots (PHP, Mongo, Node, Hadoop - you name it) whose only concern is to convince those one step higher in the hierarchy that they are still worth keeping, so any bizarre mix of trending [among idiots] memes will do.
Hey, Beavis, Node is cool, huh huh. Single-threaded callback hell in a language with implicit coercion, without a standard module system (never mind versioning), as yet another useless layer of complexity to soak up lots of man-hours for one more year? Lol wut? Node is cool.
So, all this is rather normal.
Nice imaginative example - a hypervisor in Java. I almost get a panic attack launching Eclipse and then having to wait for it to appear. Contrast that with Chrome/FF launching with umpteen tabs, or SublimeText. Clearly lots of people routinely get technology choices wrong.
I think this comment is a good example of "contempt culture," which we've probably all been guilty of, and which we should do less.
BTW, contempt for incompetence or corruption is a natural and healthy emotion. It is what contempt evolved for.
I'm suspicious that it was an MBA who wanted to use cheaper JS developers on the project.
My favorite part is that they use the address pwm.trendmicro.com. (I had to finger peck that now, my muscle memory kept typing something much more fitting.)
Why are those APIs even there? A "retrieve all passwords in the clear" API? A "run browser insecurely" API?
Has anyone considered charging Trend Micro with reckless endangerment or material support of terrorism?
Whoever is in charge of security for that project must be pretty embarrassed (or the person doesn't exist)... Also no audit? cmoooon.
I have Avira Free installed, but only the AV part; I have disabled Web/Mail Protection. So I am hoping that Avira is trustworthy enough not to push anything my way.
It consistently scored lower than most other products in detections/heuristics.
The only thing it has going for it is the light footprint.
Less broken software tends to solve this same problem using a browser extension with a whitelisted domain for access, but that has the disadvantage of requiring a browser extension for each browser, and doesn't fully protect against hostile networks. Including the "https://" in the whitelist would provide somewhat more security, especially with HSTS, a pinned certificate, and a carefully-audited single-purpose domain.
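A sketch of that scheme-inclusive check (the vendor domain is hypothetical): comparing only the hostname would let a plain-http page on a hostile network impersonate the vendor, so the whitelist entry must include the scheme:

```python
from urllib.parse import urlparse

# Hypothetical pinned, single-purpose vendor domain -- scheme included.
ALLOWED_ORIGINS = {"https://vault.vendor.example"}

def origin_allowed(origin_header: str) -> bool:
    # Match scheme AND host exactly. A bare-hostname whitelist would also
    # accept "http://vault.vendor.example", which an on-path attacker can
    # spoof; requiring https closes that hole (HSTS/pinning tighten it more).
    parsed = urlparse(origin_header or "")
    return f"{parsed.scheme}://{parsed.netloc}" in ALLOWED_ORIGINS
```

Even with this in place, the Origin header only restricts which pages the browser will vouch for; it does nothing about the deeper problem of exposing the API at all.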
But an even better design would eliminate the entire concept of connecting back from the vendor website to the client software. Sometimes the right answer to "how do I" is "don't".
Sometimes the right answer to "how do I" is "don't".
TL;DR It's not how the data is sent but where it came from.
This is an awful vulnerability, and their immediate attempts at mitigation are sad, but I feel like the Open Source community could do itself a lot of favors by avoiding this kind of tone. Not everyone working as a programmer is as good as you. Not everyone working as a programmer is any good. If you care about security here, why not concentrate on educating them? It's pretty likely the devs at any consumer hardware company aren't world-beaters; if we wanted that, we would be willing to pay more than $100 for a router. It's also likely any large company has a chain of command these things have to go down through and back up again, and for every link in that chain there's a greater-than-zero chance the person has no idea about proper security. You may feel the security flaw is an obvious red flag that should have people's hair on fire, but not everyone understands what you do.
Sure. But then don't work on a security software suite. Or have bosses who properly code-review the commits.
During the time this vulnerability existed, machines running TrendMicro were infinitely less secure than machines not running the security software. This is severely wrong and IMHO warrants the very harsh tone.
This wasn't some obscure buffer overflow or sandbox-escape vulnerability. This was running an RPC server over HTTP, allowing full remote code execution via JSONP (or an <img> tag, for that matter).
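For anyone who hasn't seen the pattern, here's a minimal Python sketch of that anti-pattern (endpoint name and data are made up): a local RPC handler that answers any GET and wraps the reply in a caller-chosen JSONP callback, so any web page can read it with a plain script tag:

```python
import json

VAULT = {"example.com": "hunter2"}  # stand-in for the stored passwords

def handle_get(path: str, query: dict) -> str:
    # The fatal combination: a data-returning GET endpoint plus JSONP.
    # Any page can load it cross-origin via
    #   <script src="http://localhost:PORT/api/listPasswords?callback=steal">
    # and the response then calls the attacker's steal() function with
    # the vault contents -- no user interaction, no same-origin protection.
    if path == "/api/listPasswords":
        callback = query.get("callback", "cb")
        return f"{callback}({json.dumps(VAULT)})"
    return "cb(null)"
```

Because the payload is valid JavaScript rather than bare JSON, the same-origin policy never gets a chance to keep the data away from the embedding page.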
To be accurate, that comment was from a security researcher at Google commenting on a closed-source software project. I agree that the tone of the comment was not beneficial, but I don't view it as being associated with or reflective of the open source community.
This is no small bug that was overlooked in a code review. The mere idea of several of the things in it is so crazy that even suggesting them is a massive red flag. Actually implementing them?!
If it goes through a chain of command, then ONE person realizing how bad this is should be enough.
(Also, what "Open Source community"?)
They are selling a security product. They are the ones that should already know these things.
2) If you are releasing a security product you don't understand, this is worse than not releasing anything. Normally I'm with you, but this is security. If you need to educate them, __they should not be releasing software__. This is downright irresponsible and harmful. It's like attaching velcro to a door, calling it a lock, and selling it as a competitor to ACTUAL LOCKSMITHS.
In a perfect world this would never happen...
However we do not live in that perfect world. Far from it.
The reality is all over the world and all of the time, software and a great many other things are designed and sold, and the person who 'knows how it all works' leaves.
New people are hired. Some are good, and some are not. People quit, get illnesses, even die, and life goes on.
When you solve ALL of those possible issues, then you can beat that drum all you want. Until then, though, this comment is petty at best...
Would you say this in response to someone who harshly reprimanded the designer(s) and implementer(s) of the original Therac-25 control software?  If not, why not?
In the case of danger/damage to human lives a la direct physical injury, there should be a much higher standard.
It is why there exists murder in the first, second, or third degree; with very different punishments.
There is a 'minimum standard' that must always be followed.
I guess you will disagree, but comparing a programming design error in an antivirus product to the Therac incident falls outside normal deterministic logic, and would actually make my point. As someone who has made software for many years: it is not reasonable to expect a Therac or NASA level of diligence.
This is what courts have upheld as well.
Agreed. In some jurisdictions, people rely on security software to keep them from being identified, and then tortured and/or killed by their governments.
If a given piece of security software that claims to protect its users instead makes them substantially more vulnerable to attacks that would reveal information stored on their machines and/or permit the attacker to install arbitrary software of their own choosing, that is both a breach of trust and -in some jurisdictions- tantamount to handing that user over to the jurisdiction's Inquisitors.
> This is what courts have upheld as well.
Courts have repeatedly upheld that members of the American public don't have standing to challenge the NSA's dragnet domestic surveillance program. While the notion of standing has great value in helping to prevent groundless suits from wasting everyone's time and money, it's pretty clear that the courts:
* Are slow to adapt to rapid changes in the nature of the activities they're supposed to adjudicate
* _Often_ fail to be as infallible as they wish they were
While it may not be illegal to be an incompetent security software vendor in America, I - and many others in the industry - think it's entirely reasonable to name, shame, and disparage companies that deem it acceptable to ship "security software" containing vulnerabilities that anyone with a year of relevant experience under their belt would be able to spot and fix.
> ...it is not reasonable to expect a Therac or NASA level of diligence.
While it would be ideal for security software companies to adopt avionics-software-style design and QA procedures, the errors found by Ormandy are things that would have been obvious to anyone with more than a year in the industry... It's obvious to anyone who reads that bug report that Trend Micro couldn't be arsed to do the industry-standard level of QA and have one of their mid-level guys spend a couple of hours reviewing this part of their consumer-level security software.
While Trend Micro might not be legally liable for it, that's still negligence.
 Yes. I'm very aware that court is expensive, slow, and often used as a cudgel against regular folks by people who have rather deep pockets. The requirement to prove standing likely prevents far more nuisance suits than it kills suits that should be heard and judged.
 Frankly, I hope that most folks in this business hold this opinion.
 In this case, web development experience.
Honestly, I'm impressed that Tavis keeps the tone as professional as he does. This vulnerability and the AVG one from last month (https://news.ycombinator.com/item?id=10803467) are both head-slappingly stupid, just entire features that are ill-conceived and poorly implemented, and from software companies that should know better.
I'm not saying we should tell off support people when this happens. But they do have to show solidarity with customers whose security and privacy were exposed, by accepting coarse language when it comes.
Horseshit. Having a publicly-exposed HTTP endpoint that enables arbitrary code execution is so goddamn stupid that even an intern with room-temperature intelligence should've been like "Hey, isn't this a possible vulnerability?"
I'm all for the principle of charity, but if you are in a field where people are counting on you not to be a rank amateur then you better step up your game and not be surprised if they call you out on your incompetence.
I'll humor a kid who screws up carving a turkey--but if they claim to be a neurosurgeon, they damned well better have a steady hand.
Also, how is an origin check a fix? Like, great, now only Trend Micro (and presumably any impersonators via DNS poisoning or similar tricks) can read all my passwords in plain text?
>It's also likely any large company has a chain of command these things have to go down through and back up again and for every link in that chain there's a greater than 0 chance the person has no idea about proper security. //
When the locks need changing in a school it doesn't matter that the headteacher has no idea how to fit a lock, nor what lock needs putting on which door; they have an employee who has some competency in the field (caretaker/janitor) sufficient to watch an outside expert (locksmith say) perform and certify the work. If the locksmith says "just paint the door with a brick pattern and don't bother with locks" then the janitor should realise that's not sufficient.
>It's pretty likely the devs at any consumer hardware company aren't world-beaters //
Trend Micro are a security software company, aren't they? From their website:
"As a global leader in IT security, Trend Micro develops innovative security solutions that make the world safe for businesses and consumers to exchange digital information. With over 25 years of security expertise, we’re recognized as the market leader in server security, cloud security, and small business content security."
Like, come on. A "global leader in IT security" can tell that remote code exploits exposed across your entire install base are worth paying a not-rubbish programmer to get there ASAP and fix your product.
As someone who regularly discloses vulnerabilities in open source software, I respectfully disagree with this statement. You are, of course, welcome to have your own opinion.
The truth is that technology has gotten to the point where poor security is lethal. From vehicles to medical devices, poor security can now allow a bad actor to put others in harm's way.
Now, mistakes happen. No one is perfect. But we should demand the adoption of systems that do their best to minimize the potential damage of human mistakes.
Now, yelling at the construction worker because the bridge was poorly engineered is not at all effective. But the problem here isn't the tone, only that it needs to be directed higher in the organization.
This is worse since TM is an AV vendor and probably has this deployed across so many desktops.
Why is it visible already?
And I'd say they probably use C or C++ for most other things, which isn't exactly safer.
You'd have to restrict where images can come from (people wouldn't be able to use CDNs?), where scripts come from (a lot of sites load jQuery from Google), you couldn't allow JS redirects. You'd have to forbid JS from automatically submitting <forms>. CSS on odd domains. Webfonts. Videos, audio. I'm sure there are things I'm missing.
Back on the AJAX side of things, JS can make GET requests through AJAX. The server has to send back a special header for the script to get the response, but if it doesn't, the request is still executed. (The idea being that, since you can execute GET requests using any of the aforementioned methods, allowing JS to do it isn't going to allow anything new. GET also isn't supposed to have side effects like this buggy endpoint has … GET was an extremely poor choice here.)
More arbitrary requests are permitted, but only if the server explicitly allows them via what is called a preflight request. (Note that GET isn't the only exception to preflighting.)
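A server-side sketch of those rules (hypothetical policy, no real framework): the key point is that for a "simple" GET, the request has already executed by the time the browser decides whether the page may read the reply:

```python
TRUSTED = {"https://app.example"}  # hypothetical allowed origin

def cors_headers(method: str, origin: str) -> dict:
    # No CORS headers returned: the browser hides the response from the
    # page, but a simple GET has ALREADY run on the server, side effects
    # and all -- which is exactly why side-effecting GETs are so dangerous.
    if origin not in TRUSTED:
        return {}
    if method == "OPTIONS":
        # Preflight: non-simple requests (e.g. PUT, custom headers) are
        # only SENT at all after the server approves them here.
        return {"Access-Control-Allow-Origin": origin,
                "Access-Control-Allow-Methods": "GET, PUT"}
    return {"Access-Control-Allow-Origin": origin}
```

CORS, in other words, is a read-permission mechanism, not a send-permission mechanism, except for the preflighted cases.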
Still, I don't see why sites should ever be able to make any request to localhost.