Hacker News new | past | comments | ask | show | jobs | submit login
Show HN: Enter your URL and view CVEs affecting your stack over last 6 months (secalerts.co)
144 points by GiulioS 12 days ago | hide | past | web | favorite | 48 comments

It’s a pretty poor implementation that is basically matching on the lowest common denominator, by platform rather than by library or framework. An ASP.NET website is fully independent of a WCF vulnerability. They can coexist but definitely don’t have to.

Additional suggestion: many times the home page is a link to many different technologies. Crawl all first-level directory indices to see different techs. E.g. we have a xenforo-powered forum at /forums, a WordPress blog at /blog, a custom ASP.NET CMS at /store, a .NET Core web app at /foo, etc.
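The per-path idea can be sketched roughly like this (the signatures below are hypothetical toys; a real detector such as Wappalyzer ships far richer rules and would fetch each path over HTTP):

```python
import re

# Illustrative fingerprints only (hypothetical, far from exhaustive).
SIGNATURES = {
    "WordPress": re.compile(r"wp-content|wp-includes", re.I),
    "XenForo":   re.compile(r"xenforo", re.I),
    "ASP.NET":   re.compile(r"__VIEWSTATE|X-AspNet-Version", re.I),
}

def fingerprint(headers: dict, html: str) -> set:
    """Guess technologies from response headers plus the page body."""
    blob = html + " " + " ".join(f"{k}: {v}" for k, v in headers.items())
    return {tech for tech, pat in SIGNATURES.items() if pat.search(blob)}

# Fingerprint each first-level path separately instead of only "/".
sample_responses = {
    "/blog":   ({"Server": "nginx"}, '<link href="/wp-content/themes/a.css">'),
    "/forums": ({"Server": "nginx"}, '<html id="XF" class="xenforo">'),
}
for path, (headers, html) in sample_responses.items():
    print(path, fingerprint(headers, html))
```

The point is that a WordPress signature at `/blog` says nothing about what runs at `/store`, so scanning only the domain index misses most of the attack surface.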

The domain index for most companies past a certain age/size not dedicated solely to a single app effectively turns into a static HTML page.

My static site running on OpenBSD 6.5 httpd gets identified as Apache ¯\_(ツ)_/¯

Same here. It also thinks I'm running PHP 7.1.31 when in fact I'm running 7.1.32.

My Angular site is getting detected as Ruby on Rails :)

Creator here. We built this using Wappalyzer to detect the software given a URL and match it against our database of CVEs and thought it might be a fun little tool.
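The matching step described here might look roughly like the following (toy in-memory CVE records with fabricated IDs; the real service presumably queries a proper CVE database):

```python
from datetime import date, timedelta

# Toy CVE records (cve_id, affected_product, published) - fabricated examples.
CVE_DB = [
    ("CVE-2019-0001", "varnish",   date(2019, 8, 1)),
    ("CVE-2019-0002", "wordpress", date(2018, 12, 1)),
]

def recent_cves(detected: set, today: date, db=CVE_DB, window_days=183) -> list:
    """CVEs for the detected software published within the window (~6 months)."""
    cutoff = today - timedelta(days=window_days)
    return [cve for cve, product, published in db
            if product in detected and published >= cutoff]

print(recent_cves({"varnish", "nginx"}, date(2019, 8, 20)))
```

Note this matches on product name only, which is exactly why version-less detection (discussed elsewhere in the thread) produces false positives.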

Is this released open-source ?

I pointed it at my lighttpd server and all I got was "cannot detect software" or so.

Yep, that's what I got. Tried it on a couple of my web apps.

This is a nice addendum to the "Let Us Identify Your Stack" style web services, though I guess some of them might already provide this.

It does have the somewhat negative effect of making potentially vulnerable websites more visible to lower order hackers (I'm assuming more proficient ones have automated discovery tools like this anyway).

There are browser extensions that do this same thing. It's pretty trivial to do.

https://www.whatruns.com/ - not so precise, I must say

It's always a possibility, but this tool doesn't look for version numbers, so it's more time-consuming to narrow down whether it's vulnerable to something.
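When a site does expose a version, narrowing things down is straightforward; a sketch, assuming a conventional `product/version` Server header (the helper names here are hypothetical):

```python
import re

def parse_server(header: str):
    """Split 'nginx/1.14.0 (Ubuntu)' into ('nginx', (1, 14, 0))."""
    m = re.match(r"([\w.-]+)/([\d.]+)", header)
    if not m:
        return None  # header absent, stripped, or non-standard
    product, ver = m.groups()
    return product.lower(), tuple(int(p) for p in ver.split(".") if p)

def is_patched(version: tuple, fixed_in: tuple) -> bool:
    """True if the running version already includes the fix."""
    return version >= fixed_in

print(parse_server("nginx/1.14.0 (Ubuntu)"))
```

This is also why many admins strip or falsify the Server header, which in turn produces the misidentifications reported elsewhere in the thread.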

I tried entering a bunch of major websites. Looks like ibm.com is full of holes that need patching.

Looks like it's overloaded - HTTP/502 on the API here.

Thanks. Increased the thread pool size, so hopefully that should save it from all these hugs.

Finally, a place that can gather IP addresses and associate them to specific security products to have them hacked later. Just what I've been waiting for.

Why do you think this is any easier than just scraping URLs off the internet?

I'm always surprised by the mindset of the person you're replying to.

I learned a few years ago from some DEFCON video[0] that someone had figured out a way to do a (basic) port scan of the whole internet in ~1 day (or something like that).

Thing is... it really shouldn't have been that surprising. Although network latency isn't getting that much better year by year (c = c), the amount of data you can process in bulk, correlate, etc. is ever-increasing.

[0] At least, I think that was the conference.

In fact, there's a tool that claims to scan the internet in 6 minutes: https://github.com/robertdavidgraham/masscan

To be fair, anything sending traffic like this is probably going to get your network blocked:

"This program spews out packets very fast. On Windows, or from VMs, it can do 300,000 packets/second. On Linux (no virtualization) it'll do 1.6 million packets-per-second. That's fast enough to melt most networks."

That's probably the DEFCON talk the parent poster is talking about:

Mass Scanning the Internet - DEF CON 22 (2014)


Yes, thank you, I think that was the one. Only missed by an order of magnitude or so.

Port scanning is a real and established thing that anyone who is even thinking of security has known about normally for decades, but port scans don't tell everyone what your whole stack consists of. Maybe you'd like to share why sharing your stack with everyone is a good idea? I'd really like to know. Thank you.

Do you think that fingerprinting is quantitatively different from port scanning? My main point was just that a port scan can immediately identify the 0.1% (or whatever) of the Internet that you're interested in and then try more "invasive" probing.

(I should add that I forgot to mention that IPv6 does make the whole "port-scan the internet" business a tad more complicated, so that's an argument against me.)
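The IPv6 caveat is easy to quantify: the address space is 2^96 times larger, so exhaustive sweeps are simply out, and attackers harvest IPv6 targets from DNS, certificate logs, and the like instead:

```python
IPV6_ADDRESSES = 2 ** 128              # ~3.4e38 addresses
pps = 10_000_000                       # a generously fast scanner
seconds_per_year = 365 * 24 * 3600

years = IPV6_ADDRESSES / pps / seconds_per_year
print(f"{years:.1e} years")            # on the order of 1e24 years
```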

Wait until you hear about shodan...

I've used Shodan. But seriously, please give me a list of real resources if you think you're up to the task.

We don't store anyone's IP address, and the results are only cached in memory.

It's not you, but the implicit idea of trust. How are three people I know nothing about going to give me better results than a known name? Best of luck establishing yourselves though.

Alternatively, secure your stack and don’t have it hacked later.

Those self-managing their machines and sites may worry that a breaking change or update could cause downtime; LXD/Docker could simplify that and confine the risk to individual containers.

Part of securing your stack is not sharing with everyone exactly what you have, or do you not study security at all?

Don't shoot the messenger.

Or...interrogate the messenger when he tells you something that not even he is supposed to know.

Is there an open source alternative that could be self-hosted and configured to run automated and periodical checks?

While not a web-based or automated option, if you want to run a quick crawl and scan on your apps you could try OWASP ZAP; it also has quite a few handy plugins - https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Proje...

Metasploit? You don't even need to host it (why are we so obsessed with making everything a website?)

Metasploit isn't the best choice for webapps, you probably want nikto or similar. Here's the owasp list: https://www.owasp.org/index.php/Category:Vulnerability_Scann...

The key part is automation, not a website. Make it part of your ci/cd pipeline.
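A pipeline gate along those lines can be as simple as diffing your pinned dependencies against an advisory feed and failing the build on a match. A minimal sketch with fabricated advisory data (real pipelines would pull from a source like the NVD feeds or use a tool such as OWASP Dependency-Check):

```python
# Hypothetical advisory feed: product -> first fixed version (fabricated).
ADVISORIES = {"example-lib": (2, 4, 1)}

# Pinned versions as a CI step would read them from a lockfile (fabricated).
PINNED = {"example-lib": (2, 4, 0), "other-lib": (1, 0, 0)}

def vulnerable(pinned: dict, advisories: dict) -> list:
    """Names of pinned packages older than the first fixed version."""
    return [name for name, ver in pinned.items()
            if name in advisories and ver < advisories[name]]

bad = vulnerable(PINNED, ADVISORIES)
if bad:
    print("vulnerable pins:", bad)
    # a real CI step would exit nonzero here to fail the build
```

Running this on every commit (rather than on a subscription schedule) is what catches a stack that quietly changed since you signed up.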

Wasn’t able to detect what software my site uses; my Server header is disabled.

I don't need to provide my (potentially vulnerable) production URL to whoever-you-might-be in order to identify the last 6 months of vulnerabilities - I can just google for that.

Submitting your site to this is just asking for trouble.

I’d suggest that running a vulnerable production service is asking for trouble!

My web logs are full of automated scanners. Once when I ran a vulnerable version of Wordpress it got discovered and pwned very quickly. No need to enter the URL in any website ;)

This just seems like a mailing list for CVE alerts for popular software. If you put in HN, it says it failed to detect the stack, then asks you to choose your software and enter your email to receive alerts.

It's kind of clever marketing, giving people a sense that they're going to get a security audit in exchange for an email address.

The first URL I entered (coop.co.uk) was actually pretty awesome, it detected Varnish and showed a critical CVE from last week. That’s cool.

I hope that if you subscribe, the site regularly rescans your stack and notices if it's changed. Otherwise it's just a mailing list subscription that becomes out of date and therefore not useful.

Anyone can (and people do) scan the internet for hosts on ports 80/443, and unless your site uses virtual hosts and has no HTTPS certificates issued to any of your domains, it's going to be discovered and probed exactly like this site does anyway. The difference is that real adversaries do it without you knowing.
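Certificate Transparency logs are one of those discovery channels: every publicly issued certificate is logged, so hostnames leak whether or not anything links to them. Extracting hostnames from CT-style records is trivial (sample records below are fabricated, in roughly the shape crt.sh's JSON output uses):

```python
import json

# Fabricated records in roughly the shape crt.sh returns (newline-separated
# names in the "name_value" field).
ct_json = json.dumps([
    {"name_value": "staging.example.com\nexample.com"},
    {"name_value": "forums.example.com"},
])

def hostnames(records_json: str) -> set:
    """Collect the unique hostnames mentioned across all CT records."""
    names = set()
    for rec in json.loads(records_json):
        names.update(rec["name_value"].splitlines())
    return names

print(sorted(hostnames(ct_json)))
```

Even a "hidden" staging subdomain shows up here the moment it gets a certificate.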

This is the textbook definition of security by obscurity.

It's not even that. There's no obscurity here. This is security by pretending.

Security by head in the sand: can't be any vulnerabilities if I don't know about them.

Then somebody else is going to do it for you.
