A mysterious grey-hat is patching people's outdated MikroTik routers (zdnet.com)
397 points by wglb 64 days ago | 210 comments



>But despite adjusting firewall settings for over 100,000 users, Alexey says that only 50 users reached out via Telegram. A few said "thanks," but most were outraged.

Have to wonder if those "outraged" users are ones who would have proactively fixed it themselves, or if they would've let their router happily continue to chug away as part of a botnet.


Every now and then, when I am bored, I reverse engineer some of my phishing emails (LinkedIn message, FedEx parcel, etc.). Very often I find that the phisherperson has embedded a rogue document (often .php) on a legitimate server. Sometimes I send a polite email to the admins of these sites warning them about the injected file. I have NEVER received a thank you from any of these people. I don't care - I am not doing it for thanks, but I do sometimes wonder what the internet has done to once-common human decency and politeness.


As someone with the authority and means to shut down domains for exactly this, the truth is that most people have either used email addresses they never check or just ignore all warnings. I'd argue >75% of the people contacted never reply. Their entire domain gets shut down, and then probably 75% of those do finally get in touch asking why their domain is down. Most likely, since WHOIS data is public, people just don't put addresses they check often there.


>with the authority and means to shut down domains for exactly this

How do you get that authority to do that? What does "shut down" entail? Does that mean you can unregister or hijack domains? I'd like to know more about this, as well as the accountability process and where I can report abusive behavior that will actually get addressed.


I work for a registry with many TLDs. "Shut down" means removing the NS records from the TLD zone. And yes, you can generally report this type of thing to a TLD's registry. In my experience we often act quicker than many registrars (though it depends on a TLD's owner and policies... some are much better wrt malicious activity than others).


I used to work downstream from groups like yours. In the data center, we would receive abuse reports and investigate. Look for 100% CPU use, or wide-open ports, or just visit the server on port 80 and see the phishing page. Then pull the nic and email the technical and administrative contacts.


They probably work for a registrar, and abuse contact information can typically be found in WHOIS data.


Hey, just because it may be a while until I next get the chance to ask someone who may have the answer - I've wondered a fair bit: do you guys get a lot of spam at the abuse@ addresses for domains, or do spam bots know not to bother with them?

No real reason to ask, in all honesty; it's just been one of those curiosities that pops into my mind occasionally.


I work(ed) for a TLD registry, not a registrar, so our information isn't in WHOIS. As such, our abuse address gets little to no spam. We do get email from the big names in security research and abuse tracking.


I'm currently dealing with this from jetigroup.?rg. The registry information is invalid, and the contact emails I found for the guy who ran the company at one time bounce back, but a weak password on a Mailman install let someone create a distribution list that allows every recipient to post. They broke the unsubscribe part of the script, so nobody can get off the list. Until I made a rule to kill all mail from the domain, the flood of "remove me, I'm contacting the secretary of state" messages was simultaneously hilarious, annoying, and sad.


domainabuse@tucows.com, apac-domain.manager@endurance.com, ipadmin@websitewelcome.com, qkhldjwp@whoisprivacyprotect.com, ipadmin@publicdomainregistry.com, abuse@publicdomainregistry.com, cpanel@webhostbox.net


Thanks. I've already emailed most of these, but will try the others.


try abuse@pir.org, also.

Typically a registry will just fwd it on to the registrar. That said, registrars tend to take complaints from the registry more seriously imo.


Can you shut down SMS? I've recently been bombarded with SMS from varying numbers; they all use the same $NAME.


I once got a really good phishing email pretending to be ebay. It included my full legal name and my ebay id. It was some BS threatening to sue me for nonpayment if I didn't paypal them money or some stupid shit like that.

So I forwarded it to spoof@ebay.com with the message "reporting phishing email" or something. Somehow that report got "handled" by a clueless, non-technical, front-line rep who thought I thought the email was real and was inquiring about its contents. It pissed me off that the email wasn't handled by the correct department. I won't bother forwarding any more phishing emails.


I think you should have used abuse@ebay.com


No, eBay's website specifically says to forward phishing emails to spoof@ebay.com.


You're right: https://www.ebay.com/help/account/protecting-account/recogni... Thanks for the information.


How do they know that you can be trusted, and aren't just another spammer/phisher?

You and I can tell the difference, but to the sort of people who run vulnerable servers, perhaps a legitimate email about server security looks indistinguishable from the others ("Hi I'm from Microsoft technical support. Please let me in to your computer to help you fix it").


What I do in my emails is tell them the exact URL of the bad page. All they need to do is look at the file with a text editor (they are admins, after all). Once they have done this, they will see strange JavaScript. They will know it has nothing to do with their own (or their clients') web pages. There are no links per se in my email (except the URL, but I leave off the http:).


Don't you worry that if you email spammy/virus-laden links then your email address could get flagged?


I've worked as a security analyst at a company and sometimes I would report phishing pages to the webhost. After a while, I realized that half of my emails were being silently quarantined by the company's outbound spam filters due to the included URLs. I was able to manually release them, but I wonder how many emails will then be flagged on the receiving end.


Typically, when sending a notification with content like that, you'll "defang" the URL by rendering it like "hXXp:// foo (dot) bar (dot) com" or something along those lines so that it isn't automatically flagged and filtered, though it's also common for the receiving end to apply no spam filters to their abuse@ address. You'll usually have better luck sending this information to the abuse email listed in the IP WHOIS than to any contact information at the domain itself.
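
For illustration, a minimal Python sketch of that kind of defanging (the exact hXXp / (dot) / [.] conventions vary from analyst to analyst; this is just one common form):

    import re

    def defang(url):
        # Neuter the scheme and the dots so the URL can't be clicked or resolved.
        url = re.sub(r"^http", "hXXp", url, flags=re.IGNORECASE)
        return url.replace(".", "[.]")

    print(defang("http://evil.example.com/login.php"))
    # prints: hXXp://evil[.]example[.]com/login[.]php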


> you'll "defang" the URL

Wow, that's a phrase I haven't heard in ages.



Perhaps using a dedicated email for these sorts of reports could limit the damage if that were to happen. It would probably increase the chance of being flagged, though.


Most site admins aren't really aware/in-charge of the javascript on their pages.

Most webmaster@ or admin@ e-mails aren't monitored at all, or so flooded with spam that it's easy for things to get lost.


I don't think you need to trust the sender of the email. If someone emails me and there is a link to mywebsite.com and I click and it looks like the Google login page, I am going to be super alarmed and take the necessary action. Maybe they are out to get me (they hacked my website and put malware there), but if it's my own website that gets me, that's on me.


> How do they know that you can be trusted, and aren't just another spammer/phisher?

That's not it, not all of the time anyway.

Over the summer I discovered a third party mail server with a missing DNS entry. It was like that for months and all their mail was getting flagged as spam.

I sent them an email (from an account that wasn't flagging their mail as spam) pointing it out. They fixed it within 24 hours but I never got a single reply.


They wouldn't trust the sender; they would assign a dev to inspect the referenced file, and decision making would start at that point.


The dev got his $99 two years ago and had long since taken up other contracts.


> I NEVER received a thank you from any of these people.

Is it possible that they (perhaps mistakenly) believe that communicating with you could open them up to civil liability?


Or that your email is the actual attack they need to worry about.


> I'm a security researcher and I discovered that your server has been compromised. Click this legitimate link to learn more.


more like "click this legitimate link TO YOUR OWN SERVERS to learn more". Big difference :)


To you. They don't see it.


"Thanks for the info, I'll check this out :)"

There, nothing was admitted.


At least four things were admitted: that you received the information, somebody processed it, the approximate time you received/processed, and the intention to take action.

I would hope any well-intentioned and reputable company would not mind, but some might not want to admit any of that! Plenty of ammo for anyone who subsequently blames you if you then fail to remedy the situation in a timely fashion.


A reputable company that deserves its reputation is probably not hosting phishers' pages on its site. Sure, shit happens, but anything above a micro company that's hosting pages should catch that. The shared hosting company I used caught a breach on my personal page once; another time Google notified me. It's not rocket surgery to catch these things, is it?

If the company is too small to monitor their own pages then I'd expect them not to be worried about this sort of liability (ie knowing of a breach, they're too small to be sued for much, presumably: if they were bigger they'd know about it already).


You just can’t tell, when your job is hosting user content, e.g. managed website hosting (cpanel) or static pages (Azure static website hosting). I mention these two companies because I received 2 phishing attempts this week, both pretending to be from Microsoft, with the payload hosted on cpanel and Azure respectively.

Both have an abuse / phishing report form online. I reported both pages, and they are still up for the moment.


> Is it possible that they (perhaps mistakenly) believe that communicating with you could open them up to civil liability?

I'm curious, for vulnerability-by-inaction like this, would there be a legal difference if sent emails were posted to a public blockchain?

The intent being, if you're later sued for harm caused by your compromised hardware / IoT devices, you cannot claim ignorance as easily.

End goal, of course, being that people care more about patching their devices.


I doubt a court would make a distinction. The claimant (plaintiff) would have to prove the defendant knew about the emails being sent to the public blockchain and decided not to do anything about it.

That's not necessarily easy to prove, in the same way the defendant could claim emails were trapped in spam filters, etc. or more realistically, the burden of proof is on the claimant so the defendant wouldn't say anything if they're smart.


Ignorantia juris non excusat.


This is not about ignorance of the law, so that doesn't apply here.


>I do sometimes wonder what the internet has done to once-common human decency and politeness.

Politeness essentially disappears once you can't see somebody's face. Ten minutes in any online game should be proof of concept enough.


Yeah, it's pretty bad. Some games try to gamify polite behavior (bonuses for getting tagged as helpful in a dungeon, etc), and that sort of works. Kind of sad that it's necessary, though.

I also think the rude behavior is a combination of both anonymity and "I'm never going to see or hear from this person again".


You wonder if it's the Internet or the desire to deflect liability. The insurance card in your car instructs you never to admit guilt; it's not a long stretch to assume the same factor is at work here.


That's great of you. I often do this as well and have never received any acknowledgement or thank you :(


To you and OP: Thank you for your efforts! Don't stop fighting the good fight!


Ridding the world of PHP, one page at a time :)

But seriously, thank you for taking the time to do this.


In the cases where the problem is fixed, I think you'll find that the explanation is as simple as this:

They view the message as pointing out a failure on their part, and they do not want anything showing that they have made a mistake in some way. So they do not acknowledge your message, as it provides a means of tracking that failure.

For those cases where it is not fixed, there is no-one who cares to do so.

In the past, I have contacted website admins about various aspects of their sites (non-security related) that work poorly for those of us who are getting older and have increasing eyesight difficulties. The usual response has been "No one else has complained, so take a long jump off a short pier - our site is perfect." I sometimes try to explain that people won't continue to visit if the experience is bad, nor will they bother highlighting that there are problems. They will generally still respond with "shut up and go away."

You just leave them to their ineffective site and move on. Very occasionally, you get back a thanks and see improvements made, but that is rare.


Generally speaking the standard is to not respond to email reporting malicious activity on a server, just to resolve the issue and carry on, particularly if the report is being read by an admin at a webhosting company. Doesn't mean the notification isn't appreciated!


I was once repeatedly getting phishing emails from a small fire company a few states away. It seemed that one individual's email there had been compromised. I called the fire company and asked to speak with him. He was super embarrassed and thanked me. Their IT person* apparently didn't know how to fix the issue, so I suggested a few possible solutions and we parted ways. I didn't get any more emails from his address, so maybe it worked?

* their IT person I think was really just the person who was best with computers.


I'm on many of my company's distribution lists for info@, support@, and other common addresses. I do the exact same thing, but I've received a number of email responses saying thanks or asking for the original email headers.

I usually only notify .edu or nonprofit organizations and completely ignore large corporations. Sending an email to a larger organization usually gets lost and nothing comes of it.


> I send a polite email to the admins of these sites warning them about the injected file. I NEVER received a thank you from any of these people.

Out of curiosity, do you receive answers at all?

If not, there could be a technical reason rather than the decline of human decency: your including the link to the phishing page gets the message filtered away by automated security software.


Do you ever follow up in an isolated, safe environment to determine if the file still exists? Curious to see if there is any correlation between the "admins" and the hackers themselves.


I do the same. Which reminds me -- when you get a 404 from utwente.nl you're redirected to a domain squatter / advertising site.

I can't be bothered to report it to them.


Sidenote: how does phishing via LinkedIn work? Recruiter spam?


I get an email claiming that I have unread LinkedIn messages. The email uses their logos. But if I were to click any of the links in the email, it would send me to a .php or .html file that contains a JavaScript redirect script. That script, if executed, then goes to the phisher's actual page. Sometimes there is an additional DNS redirect at the JS-redirected page. For some reason, the JS redirect tries to hide the destination by encoding the target URL in an array of integers. The script converts the numbers to characters, concatenates them, and then sets the location property of the DOM. If there were enough interest, I could write all this stuff up as a blog post.
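
As a rough illustration of the decoding step described above (the integer array here is invented, not taken from a real phishing kit), the same trick in Python looks something like this:

    # Decode an obfuscated redirect target: the kit ships the URL as character
    # codes and rebuilds it client-side before setting window.location.
    encoded = [104, 116, 116, 112, 58, 47, 47, 101, 118, 105, 108,
               46, 101, 120, 97, 109, 112, 108, 101, 46, 99, 111, 109]

    target = "".join(chr(code) for code in encoded)
    print(target)  # http://evil.example.com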


The solution should be to stop using human-generated passwords and instead have each site generate its own, with browsers and apps using password managers built into the OS that offer to fill them in based on the domain. This is increasingly happening. We need the large sites to move to this to eliminate phishing entirely, so that https://f00l.com isn't the same as https://fool.com.
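
A minimal sketch of why origin-bound credentials resist phishing (the domains and API below are purely illustrative): the manager only releases a password for an exact, previously registered origin, so a lookalike domain gets nothing to steal.

    import secrets

    class PasswordManager:
        def __init__(self):
            self._vault = {}

        def register(self, origin):
            # Generate and store a random password for this exact origin.
            password = secrets.token_urlsafe(24)
            self._vault[origin] = password
            return password

        def fill(self, origin):
            # Only fill on an exact origin match; no fuzzy matching.
            return self._vault.get(origin)

    pm = PasswordManager()
    pm.register("https://fool.com")
    print(pm.fill("https://fool.com") is not None)  # True
    print(pm.fill("https://f00l.com"))              # None - the lookalike gets nothing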


I use Keepass and still manually paste and autosave my passwords.

Have browser extensions improved for this? When I last checked, 5 and 10 years ago, they didn't seem to work.


For sites that I don't care that much about, I use and like Chrome's (relatively) new password manager [0].

I haven't looked into it in enough depth to be 100% convinced to trust it with my financially-linked passwords. (In reality, it's almost surely good enough, but I haven't reached that informed conclusion yet.)

[0] - https://www.blog.google/products/chrome/chrome-password-mana...


Oh yes.

1Password X for Firefox and Chrome, 1Password on Safari have pretty much solved this problem. The vast majority of passwords I fill are CMD + [shortcut].

Generating and saving new ones works like that maybe 50% of the time. The failure rate however is driven less by the extension technology and more by the password form itself.


All you've done there is move the attack target from the website to the users device.

The obvious solution is to remove all users from the internet.


Is this a joke?


Yes, that's exactly how U2F works (it's scoped to domain/origin). FIDO2 builds on top of that.


I would be interested!


Yeah. “Dear Alice, I saw your right-pad repository on Github and think you’d be a great fit for a job at Google. Can you fill out an application at g00gle.com/jobs? PS is $300K OK with you?”


Lawyers. Lawyers have done this and not the internet.


> "I added firewall rules that blocked access to the router from outside the local network," Alexey said.

This could very well be what's causing the outrage from operators... Suddenly losing connection with your router that's in some data center 3 hours away, and needing a drive over just to discover it was some dude adding rules to your production equipment, would be upsetting.

There's legitimate reasons for remote operators to have remote access from outside the network. Obviously the router should be secured with latest updates that guard against known exploits, but this could be a major pain for some operators.

(you'd also have to roll back to some backup since there's no telling what else the guy changed, even if you feel he's more-or-less trustworthy... which means more downtime for your customers)


Apparently exactly this happened:

https://pikabu.ru/story/vzlom_routeros_5924128

assuming, at least, that the Google translation is decent: "It seems to be even good, but the admin’s account has severely cut the rights, the attackers created another one with full rights. The office is far away, the provider settings are pppoe, we can’t remove back-ups, we can also unload the config, advise how to be?"


But still, why would you be upset? At the very least this guy has made you aware of a known security hole in your router. Sure, the timing may be inconvenient, but at least you now know there is a problem you have to attend to.

Would you rather leave the hole open and be happily ignorant of the security problem in your network?


It is trivial to secure a system by powering it off and disconnecting it from the Internet.

That also makes said system useless for getting work done.

Many people do not care about security at all; they just care that it "works". If we want the world to be more secure, the best (but hardest) way to make that happen is to make it cheaper/easier/faster to be secure than to be open (for example, Let's Encrypt).


yes, you can also dump toxic waste into the river, and that "works" just as well, but it makes you a bad citizen. of course, most people don't care about that, so it takes other people to force them to stop dumping.


Also, of course, only allowing remote access from a controlled list of source IP addresses.


Every now and then I let a software company that supplies enterprise software know that their marketing emails are going to spam because their marketing email provider doesn't have an SPF record on their domain. They know me personally but still never reply.

I told the same company that the certificate for one of their subdomains had expired. It intrigued me that the first three people on their tech team didn't know what that meant.

Still they never said thank you.
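
For what it's worth, checking for a missing SPF record like the one mentioned above only takes a few lines. A rough sketch, assuming the dnspython package is installed (pip install dnspython) and using a placeholder domain:

    import dns.resolver

    def spf_records(domain):
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        # SPF lives in TXT records beginning with "v=spf1"
        return [r.to_text().strip('"') for r in answers
                if r.to_text().strip('"').startswith("v=spf1")]

    records = spf_records("example.com")
    print(records or "No SPF record found; mail from this domain may get flagged as spam.")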


...he’s the grey-hat they deserve, but not the one they need right now...


or the one they need but not the one they deserve :)


Security issues are tricky. Often making people aware of an issue is indistinguishable from having caused the issue.


See also: why many people don't go to the doctor.


Only if you're technologically-illiterate.


I've received many phishing emails claiming that my account has been hacked. So that is a case of someone "pointing out an attack" who is actually causing the attack. Even technologically knowledgeable people fall for phishing (I haven't though).


... which many people are.

What's your point?


Why would you hack someone and then tell them you did it (without asking for ransom or something)? Of course the person telling you has good intentions.


Not sure if you're being sarcastic, but there's a whole class of attacks that begin under the guise of "helping" the user.

A hilarious and interesting example: https://www.gimletmedia.com/reply-all/long-distance

Additionally: see all the drama and issues that consistently occur surrounding bug bounty payments, secure disclosure of vulnerabilities, etc.


A few months ago I received a random confirmation of my order from one of those fast food ordering aggregator sites, based in a different country that I'd never used.

I logged in and reset the password to gibberish and emailed them to let them know what had happened, assuming user error (email was a firstlast@domain, so relatively easy to mess up I guess)

A couple of days later I received an email from the company asking for my photo ID. I politely said I wouldn't feel comfortable providing that and advised they get email confirmation from users.

I didn't hear back for a couple of weeks and thought nothing more of it. Then a notification that 'my' payment to a fast food place had bounced (or been charged back, it was hard to work out tbh). I figured I'd ignore it because extradition to the states over $36 seemed unlikely.

A few days later I get another mail from them replying to my earlier mail about email confirmation and not mentioning the charges. Never heard any more from them.

The whole thing was a bit odd, but I can't help but think letting them know early saved us all a whole bunch of hassle, and maybe they'll fix their registration flow.


“I wanted to stay insecure, damn it” - Them, probably


Obligatory: https://xkcd.com/1172


It’s an intrusion.

Would you be outraged if you came home one day and there was a plumber fixing your sink? “Oh hi, don’t worry about me, just fixing your sink. Let myself in, hope you don’t mind”

You didn’t even know your sink was leaky let alone called a plumber.


I have never found this analogy compelling for two reasons.

First, your sink is not part of a botnet (assuming it's not a smartsink, I guess). By leaving your machine unpatched, you are causing harm to others.

This makes the ethics of this sort of grey-hat hacking much more murky IMO. I'm willing to concede that the grey-hat behaved unethically, but I also believe that leaving a machine unpatched makes the machine's owner at least somewhat responsible for how that machine is used.

Further, I do not think it's reasonable to both claim that this sort of grey-hat activity is unethical and also claim that owners of unpatched devices have absolutely zero responsibility for how their unpatched machines are used. I.e., if we condemn this grey hat (assuming he simply locked to door and left and did nothing else) then we should also condemn the owners of botnet'd devices for the way in which their negligence causes harm to others.

If others can't break in and fix your stuff when it starts affecting them, then you should be held at least partially responsible for how your stuff is used by criminals.

Second, physical presence can be a privacy intrusion on its own and without any willful intent. E.g., a grey-hat plumber who is purely altruistic might nevertheless accidentally catch a glimpse of you naked. On the other hand, cyber presence almost always requires intentional snooping to cause a privacy violation.


It's all about how it's approached... and it's also a little bit about your personality.

I came home one day to a note on my inner garage door: "you left your door open, I closed it for you ;)". No name, nothing... just someone entered my garage, wrote a note, closed it and left.

I took a quick survey to see if any of the obvious valuable items were disturbed or missing and none were. I was more upset with myself for letting that happen than with a stranger "fixing" my security vulnerability.

EDIT: Another anecdote: my neighbor let himself into my house once when a water leak was discovered outside so the water could be shut off. He saved me thousands in potential damages by catching it early... I can't say I'd be all that upset finding a plumber under my sink fixing something, but that's just me.


If your sink was clogged and running and flooding the apartment below you, it would be perfectly reasonable for someone to kick your door down and turn the water off.


I've never come home to an unexpected plumber, but did have a greyhat "fix" my stuff once, and wasn't outraged at all. It felt like an intrusion, but I was grateful.

In ~2002 I was off to college with my Linux workstation. IIRC, the vulnerability was in the CUPS web UI. Someone filled the volume with a giant /tmp/YOUR_SYSTEM_IS_INSECURE_UPDATE_NOW file and shut down the affected service.

It could have been much worse.


Actually I once heard a story of a neighbor who let themselves in when the house was literally flooding and he saved the owner thousands of dollars worth of damage. That's more like what's happening with these patched routers.

I also heard a story of a guy whose house burned down. The neighbor saw it very early and did nothing about it because it was not her problem. The homeowner was devastated.

So yes, if you see incredible destruction going on, it's ok to go fix it.


I've heard that in US you could be shot for trespassing. It might be very dangerous to try fixing it.


Similarly, unsolicited help with computer infrastructure can land you in jail under CFAA. No good deed goes unpunished.


That's why all good deeds must be done anonymously :)


Certainly in Texas I would be extra careful. Either way, have someone standing outside to advise the homeowner or cops what is going on. Also call the police ahead of time and tell them what you intend to do. Maybe even ask for an officer to assist.


Better get a friend with a firearm to stand guard at the door if you must take that kind of measure. US police are ill-disciplined, trigger-happy and uninformed about the law.


What I mean is, have the police enter the home for you. They will do this if there is a threat. In smaller towns, they will likely do this even if the threat is only financial damage to the home-owner.


> friend with a firearm at the door ... US police are trigger-happy

I cannot imagine why anyone would agree to be the first target on site. That seems like a very easy way to get killed or injured.


Also, how many times have you seen someone's fly undone and very quietly and unobtrusively informed them of the fact? You can get quite different reactions, everything from grateful thanks to "how dare you inform me that I am embarrassing myself" or just be ignored.

Different people will react differently to any help you may give them. In this case, one could possibly agree that getting these machines locked down so they no longer present a threat to others is the moral thing to do, irrespective of the legality of the action.

But that is a judgement call for the individual to make knowing that there are potential consequences for their actions.


This is not a valid analogy, unless your broken sink was causing your neighbors' sinks to break as well.

And in the case of a multi-tenant building, if one person's actions (or lack of action) were causing problems for other tenants, you can safely assume the landlord would let themselves in to fix it.


I don't have a strong opinion about this case, but it would be more correct to compare it to this situation:

You come home one day, entering by the main door as usual, and when going down to the garage you notice the key on the floor with a message saying: "Your garage door was not locked and everybody knew about it, so I came in, took the key from the inner lock, locked it from outside, and slipped the key back under the door so nobody else with bad intentions can enter anymore".

In the situation you described I would be pissed off, but not in this one, and IMHO it is closer to this case.


If someone slides my key under the door I’m changing the locks

Which I guess is something a gray hat would be happy to see


There's the added wrinkle of "... and there's a band of criminals in the area that we know likes to secretly hide in the remote, infrequently used closets of people's houses, where they plot and execute break-ins, thefts, kidnappings, and acts of terrorism."


The fact is that the grey hat could intrude, which means anyone else could intrude. If you didn't want him there you should have patched your damn router. There is no point in being outraged because you have already been owned, and the grey hat did you a favor. Now, there's no need to thank them, but being mad is totally counterproductive. Learn the lesson, patch your router.


If your home had a ruptured pipe, that was spraying sewage all over the sidewalk, I think someone stepping onto your property, and turning the firehose of shit off would be behaving ethically.

Trespass to save people from themselves is one thing. Trespass to save the public is quite another.


In this case it would be getting back home seeing a note saying "Fixed :)".

So yes, it's actually fixed, but now you know that someone you don't know broke into your house without permission or supervision, and you don't know what he's done/seen/stolen in your house.


If it's an apartment building, a plumber NEEDS to go inside to fix your sink so that it doesn't flood apartment units under yours.

That's why fixing router vulnerabilities is so important: if left unfixed, botnets use them to cause harm to other people.


It is more like returning from vacation to your apartment and seeing all your carpets torn up, huge dehumidifiers going at full blast, and some tradespeople milling about because your toilet leaked and flooded your neighbor below.


That's an inappropriate analogy because it has nothing to do with security.

A more appropriate one would be a stranger changing your lock for you because vagrants have been going in and out of your house without you realising.

Now doesn't that sound more appropriate, good neighbourly and helpful? What do you have to be outraged about?

If you had a problem with strangers violating your property you should have fixed it yourself before it became common knowledge in the neighbourhood that your house is easy to walk in and out of without your consent.


You didn't know, but the mere presence of the plumber is evidence that it was faulty (since the plumber used the same security hole).

Also, the alternative in this case might be a water leak that ruins your whole house.


The plumber is not a locksmith, so no, I wouldn't particularly enjoy that.


I don't think a locksmith would be any better. How happy would you be if you came home and a locksmith had replaced all your locks because they were too easy to pick? Would you trust the new locks? Would you be worried about what else he did while he was there? Would you be upset that he didn't ask first?

If you think of this like physically accessing your house, it's going to seem bad. That's probably why people got upset.


If your house has a gas leak, in the UK the local gas distribution network has the right to enter your property to stop the leak because it may well cause damage, injury or even death.


Isn't it more like a locksmith letting himself in because your lock wasn't working as it should and he fixed it?

Without going any further in than he had to and without charging you? I'd probably be a bit weirded out but quite pleased as long as he didn't hang around!

I guess this is part of the issue... even for people who have an understanding of it, it's a nuanced topic, and the analogies are widespread but often misleading, because they're only analogies. I'd imagine most people don't even care so long as they can access Facebook.


I think it's more like someone locking your front door.


Occasionally I see a car with the window down, that someone appears to have parked and left mistakenly accessible. When I was younger I'd have opened the door and wound the window up (checking for pets and other occupants obvs). Now, I leave the car alone because if someone comes back at the wrong moment it just looks like you're breaking in.


This particular effort seems to be a mix of fun, braggadocio, and altruism. Could this sort of thing be organized with a social network and a list of tasks/problems, using a tool like Trello or Jira but for solving any problem? The result would be that anyone in the world could help/volunteer to fix real problems in their free time. Use: 1) Problem is posted 2) Investigated and confirmed to be real 3) Volunteers start to fix and swap solutions 4) Extra people are recruited as necessary 5) Problem is solved and wrapped up. This could be applied to everything from MongoDB security issues (https://snyk.io/blog/mongodb-hack-and-secure-defaults) to cleaning up neighborhood pollution (http://www.chicagotribune.com/news/local/breaking/ct-chicago...) Thoughts? Does this exist already?


Yes... volunteer work already exists. Suggesting that it could be organized through Trello or Jira doesn't really add any revolutionary element. And many volunteer organisations already have planning solutions.


Why so negative?


This is not at all negative.

Imagine parent telling some underground rebel group that their revolution would be more successful if they organized it with Jira.

Meanwhile, this concern is so far away from the rebels, who are doing just fine with pen and paper, and are more concerned with basic needs like surviving undetected.

People are of course excited by this initiative, and wish to contribute how they know. Except what they have is hammers, and there are no nails to be seen.

It looks like you are helping, but you are only diluting the focus from what's important to what's easy to mindlessly talk about in a forum.

It looks like people are trying to organize how to organize, instead of actually organizing anything. It's like the difference between being a writer and a word processing expert.


I think you have a point, but unfortunately I didn't get it from your first comment either. It read as a pretty negative comment.

It sounds like the point that you are making is reasonable, though, and unfortunately one that I see play out with a lot of FOSS projects as well. I remember a talk one time where a project lead essentially made the point that every new talk is met with a lot of "I'll setup CI for you" and "I'll setup JIRA for you", but that none of the people who say those things end up contributing code or issues.

For some reason there is a natural desire among some to organize the organizing before the thing to be organized really exists.


Note: I was not the original poster; his comment simply rang very true to me. "Lean" has been a motif in my work, as the complexity of managing precious time and resources increases.

Contributions are all well-intentioned, but they cost resources, especially if you're not great at ruthlessly filtering them out, or don't want to for whatever reason; they generate a lot of heat where that energy can't be used.

Also, well-intentioned contributors will set up grandiose structures, with no intention other than "to help" but no actual will to carry the work out. This usually turns into a wasteland after a while, which is not so much a problem until you realize you have to support it; or worse, it overshadows the original, leaner-but-actually-productive intent.

> For some reason there is a natural desire among some to organize the organizing before the thing to be organized really exists.

I think this is why we have so many engines which have no games written for it :D


It is amazing how closely this matches my experiences. I've been on projects where we were forced to accept "gifted" code that was tremendously difficult to actually maintain and bring to a maintainable state. Of course, the whole time lots of people wondered why it couldn't just be merged without testing or anything.

It is very easy for software contributions to create lots of friction and your analogy to heat and energy loss is really great, IMO.


Indeed, I try to avoid such good-intentioned bikeshedding by providing anecdotal solutions and listing tools I found helpful. That way, someone with a similar issue may find something useful or provide better advice to me.


Not true, you are wrong. Better organization leads to a focus that can solve problems at a greater scale and with easier access to solutions. Your metaphor sucks as well.


First, you need something to organize.

You can't optimize what's not there.


More often than not, the opportunities for people to help vastly outscale the hours volunteers are able to put in. Anyone can clean up trash on the side of the road and it's a no-skill job, so why is it still there? Open source projects everywhere need help, but a lot of them languish unsupported anyway.

Most people likely know countless ways they could help, but the time to do so doesn't match up with the need that's out there.


This is the question I was really getting at: where is the site where I can go to help beyond the ways that most people already know about? Where can I volunteer my services to fix a posted problem like MikroTik routers being insecure? Where is the site with a giant list of problems and descriptions of the people needed to solve them? And matching algorithms to put the two together?

More to the point: where can someone who has a totally unique skillset help solve a specific problem?

The assumption I'm making (which may be faulty) is that everyone has a unique and valuable skillset that may only apply to specific problems. (For example, I know nothing about B-list celebrities from 1950s Hollywood. But if someone had a problem to solve that involved knowledge of that period, they could post the problem and match it to the profile of a film professor, or just a regular person who happens to know a lot about that subject. The technical complexity of proving that someone knows what they are talking about and that their authority can be trusted - basically filtering spam contributions - is the biggest technical challenge. But it seems like new tools like machine learning and neural nets could help us here.)

We all know we can help do basic low-skilled stuff: donate money, volunteer at a homeless shelter, build houses for Habitat for Humanity, volunteer at a local garden, clean up trash, etc.

What website do I go to that collects all of these ways to help and more?

For ex, what website do I go to for helping in these scenarios:

1) I understand politics and want to work on bills in various countries to stop climate change?

2) I understand biology and want to cure the algae bloom I saw in my local lake yesterday?

3) I understand the physics of mechanical design and want to help fix a design flaw in a pair of garden shears that keeps cutting my skin between my thumb and first finger?

Or approach it the other way around: what site can I go to, register my interests and skills and get assigned to existing projects that will change the world? Where I can help anytime I have free time. Or get an assignment in my email?

Scenarios:

1) I sign up on the site saying I have a biology degree and live in Napa Valley, CA and can do environmental stuff. I get assigned to take Cesium 137 readings to followup on the Fukushima disaster in wine country at various GPS coordinates and get sent a geiger counter, instructions, and training.

2) I have Crohns disease but am willing to wear a data collection monitor on my body and send the data to multiple companies developing new medicines or treatments.

3) I care about elephants and can volunteer to take a shift watching through the eyes of an automated drone that scans for poachers on the other side of the planet?

There are plenty of ways to do basic help in a semi-organized manner to treat the symptoms of chronic human problems. There is no site that I've seen where solving the largest problems of humanity can be crowdsourced.

Where can I volunteer my services to help design water desalination plants to make them 100X cheaper?


Archive Team does distributed grey-hat activities on volunteer-run computers all the time, but only for archival projects.


I remember once, when I had an unpatched computer overrun by hostile viruses that rendered it unusable, wondering if one of the botnet-type viruses that want to take over your computer and use it without being detected would someday get smart enough to recognize other viruses and automatically remove their competition so that they could continue to silently infect you. Like a biological virus, one that kills the host is an ineffective one - they want to use the host's resources while not being noticed. I wonder if this is something like that - virus writers starting to get smarter about cleaning up behind them and keeping out competing malware.


This has been happening for years, especially in the "botnet community". Either someone takes down someone else's botnet through the same bug and patches it, or figures out a bug in the botnet and takes it over for themselves (for example, getting ops in the C&C channel). I think Microsoft has even done this in cooperation with others a few times; it's dubious legal territory.

You can see some historical examples, both recent (Mirai had some viruses that went around closing the bug), as well as further in the past (there's one that escapes me, it must have been around 2010?)

I wish I could cite more, I'm going to spend some time researching this and make a list for myself, it's surprisingly interesting!


I've heard rumors that nation states will keep infected machines from crashing or attracting attention so that system administrators stay away from their beachhead.


botnets often uninstall other botnets, and patch vulnerabilities they use to get in, to close the door behind them.


And of course they do - frustratingly, that behavior is industry standard even outside of malware.


This reminded me of last year's "BrickerBot" malware [0] by grey-hat The Janit0r / The Doctor, who bricked IoT devices with the stated purpose of preventing the same devices from being hacked by botnet malware, which allegedly puts the whole internet at risk.

[0] https://www.bleepingcomputer.com/news/security/brickerbot-au...


Devices on a vulnerable RouterOS version that were not already compromised would not be exploitable if the firewall was enabled. It's that simple. It's not great that these boxes used to ship in this default state, and I can _understand_ a home user unfamiliar with what they're dealing with, but what reason is there for deploying infrastructure this way at an ISP or hospital or whatever org?


I think there's still a lot of blame on Mikrotik for having such bugs in their management service and other daemons. I explicitly opened up the winbox port to be able to remotely manage Mikrotik routers I deploy (I considered their VPN implementations to be an even higher attack surface), as did many other admins it seems.

The winbox protocol supposedly runs over TLS and requires a username/password before anything is possible so I thought it should be safe enough, but through this bug anyone can download any file with no authentication (and the user db was storing passwords in plaintext which certainly didn't help)!

The web server vulnerability, sshd vulnerability, the smbd vulnerability - all are their fault. Had they used standard, well-tested open source packages there would be no problems, but they had to write their own custom implementations of these protocols for "reasons". I hate to think how many remotely exploitable bugs are lurking in their ipsec implementation.


Creating named address lists on Mikrotik routers is pretty trivial, so it's easy to create Remote_FW_Access_Allowed and add several remote IPs or netblocks to it. Then set up a firewall rule to allow Winbox (or other) port access from that address list (using an address list instead of a Src Address is on the Advanced tab when setting up the firewall rule).

Using source address lists with short timeouts it's also easy to set up port knocking - first port connection attempt adds to "Knock1" for 5 seconds, second port connection attempt from an IP on "Knock1" adds to "Knock2" for 5 seconds, (repeat for X knocks), connection attempt from an IP on "KnockX" adds to "Fully_Knocked" for (duration) (or "none static" for a permanent add). You can also do both a temporary add with a duration and a separate "Has_ever_knocked" with no timeout to build a list of all remote IPs that have ever fully knocked.
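
For anyone curious what the client side of that looks like, here is a rough Python sketch (the knock ports and router address are placeholders; 8291 is the usual Winbox port):

    import socket
    import time

    KNOCK_SEQUENCE = [1111, 2222, 3333]   # hypothetical knock ports
    MANAGEMENT_PORT = 8291                 # default Winbox port
    ROUTER = "192.0.2.1"                   # placeholder address

    for port in KNOCK_SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((ROUTER, port))      # the connection attempt itself is the knock
        except OSError:
            pass                           # the firewall drops it; that's expected
        finally:
            s.close()
        time.sleep(0.2)                    # stay inside each list's 5-second window

    print("Knock sequence sent; the management port should now accept this IP.")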

The UI could certainly be more friendly, but I think that's because they're avoiding having things that can only be set up from the command line.


Vulnerabilities are almost unavoidable.

Leaving a management port on a router open to the entire internet is a very bad practice. Would you leave an RDP port open to the world?

If you require remote access, at least restrict it to known management IP addresses.


Why is it that vulnerabilities are almost unavoidable? I’m not trying to be a smart-ass; I’m an analyst at an MSP and I’m doing my first pen-test soon. I’m under no illusions that my job title or growing responsibilities make me a security expert (or anywhere near it). Is it because the software stack is just too complex for network programmers to handle? (Not that router OSes are the only pieces of software that have vulnerabilities; and I imagine that you’d say that vulnerabilities are almost unavoidable in general.)


I'm not an expert either, so take this with a grain of salt. At the risk of sounding glib, I'd think the biggest cause of this unavoidability is that security professionals have to be "right" (in the sense of plugging every hole) every time, whereas black-hats need to be right (in the sense of finding said vulnerabilities) only once (or a few times depending on the vector, but you get the idea). Being on a Blue Team strikes me as a hard, thankless job, and I'm grateful for the people who volunteer for it.


In my book, the problem is that vulnerabilities are usually of two kinds: bugs, or unintended and unexpected interactions between different subsystems. Bugs are things like a use-after-free in a kernel modifying a little state, leading to ASLR circumvention, leading to RCE.

Unintended system interactions are bigger in my opinion, since they tend to combine bugs across systems, or they even combine multiple unintended system interactions into bigger and more complex ones. These things grow wild - some of the things people do with Meltdown and Rowhammer are wild and just enable even crazier things. On a higher level, things like server-side request forgery and DNS rebinding attacks to circumvent firewalls are powerful tools that make existing attacks more powerful. I'm nowhere near an expert, just an interested admin, but a lot of these mechanics are wild.

Now where's the point to all that rambling?

Point is, most software is written and grown in very uncontrolled ways. Software outside of aviation or the space sector is written to get the job done, and if bugs occur, they occur. A lot of software systems are running huge stacks with massive components - again, to get the job done - and no one is scrutinizing all of the interactions going on in there.

With my product hat on, that's fine. Selling things is a good way to get paid. But from a security point of view, most software systems are just waiting to grow big enough until the right people care and it'll be ugly.

This is also why I largely consider our application servers to be overly resource hungry remote shells. Puts me in the right mindset.


Well, think about the number of abstractions on top of abstractions that make up all software: from the bits on the wire being translated into binary, to machine code, to higher-level languages. Then let's talk about frameworks on top of frameworks. Unless every contributor remembers every specific detail, edge case, or assumption (and even if they manage to, we're still only human), any mistake could potentially have disastrous ramifications. As bugs are unavoidable, you're going to have vulnerabilities. Vulnerabilities are just useful bugs.

Now, of course, at least bothering with CYOA is expected in security, but is rarely implemented up to snuff.... But then again, security is a "cost center", no?


There is a saying "If someone can make it, someone can break it"

This applies to physical security also.

There are way too many attack vectors for you to plug every possible hole.

20 years ago, do you think anyone was considering that you could determine the contents of memory otherwise inaccessible to your process just by reading the memory accessible to you in certain manners (Rowhammer, [1])?

Or that a device taped under your desk could read your encryption keys right out of the air? [2][3]

Or that an attacker could intentionally cause errors by overclocking/undervolting "glitching" your device to cause it to skip certain instructions in order to gain access to it? [4][5][6]

Or that exploiting flaws in the way a CPU tries to predict the next instructions could lead to privileged information leakage? [7]

A sibling commenter hit the nail on the head. You have a large surface area to protect. They only need to find one tiny crack.

But by far the most common vulnerabilities are simply someone not properly validating input[8][9][10]<-(a ton of specific attack incidents listed here), or not allocating memory properly [11]

[1] https://en.wikipedia.org/wiki/Row_hammer

[2] https://www.theregister.co.uk/2015/06/20/tempest_radioshack/

[3] https://www.tau.ac.il/~tromer/radioexp/

[4] https://toothless.co/blog/bootloader-bypass-part1/

[5] https://av.tib.eu/media/32392

[6] https://www.multichannel.com/news/black-sunday-fix-dbs-pirat...

[7] https://www.wired.com/story/foreshadow-intel-secure-enclave-...

[8] https://www.pcworld.com/article/148007/security.html

[9] https://blog.detectify.com/2016/04/06/owasp-top-10-injection...

[10] https://codecurmudgeon.com/wp/sql-injection-hall-of-shame/

[11] https://engineering.purdue.edu/ResearchGroups/SmashGuard/BoF...


I've heard that this was the default for these routers. I have a RB951G-2HnD using NAT, and, although I patched it long ago, as far as I can tell the relevant ports were never open on the WAN side, nor any ports that I haven't manually forwarded. Was that not the default?


How long ago was it that MikroTik shipped devices that listened on the WAN port? The MikroTik hEX and RB3011 I bought last year were most assuredly not configured that way, even though the version of ROS on them was many revisions out of date.


I've purchased a rack-mounted RB2011 and a wAP ac, more commonly known as the RBwAPG-5HacT2HnD (lol, these model names), within the last 3 years that did not have the firewall enabled and contained no firewall rules by default. If I had to guess, it was around the ROS 6.32 or 6.33 release for the RB2011, and recently for the wAP.


6.32 came out in 2015. 2011 was the early 5.x days


According to MikroTik, the firewall is off by default, but the web interface also listens on the LAN only by default. This is a more secure configuration because it prevents people from going "hurr durr firewall breaking muh videos" (in part because of shitty video games telling everybody to turn off their firewalls and antivirus) and creating security problems.


Back in college (early 90's) I would run ToneLoc overnight, and wake up to view the results. One time I had a hit, and I ftp'ed into some server on Mindspring's customer IP range.

A quick look at the system told me the root account had no password set. So I tried a telnet and wham - I was in.

A 'who' showed someone else was logged in. So I sent a broadcast message to them saying they need to secure their system better, and logged out.

I had the habit of using an obscure hotmail email to log into FTP servers, and had done that here. I was surprised to see an email from the owner - wanting to know how I found his system, etc. He was nice, but I didn't reply.


> He was nice, but I didn't reply.

Noooooo. Story started off well and then you tree fiddied me!


Not sure what they mean by "mysterious". He isn't hiding and never was. He posted his photo, name, and other personal details in articles about MikroTik on a Russian IT blogging platform [1]. His name is Alexey Sopov, 34, from Novosibirsk. A quick search revealed his social network accounts:

https://fb.com/100005153643926

https://vk.com/lmonoceros

[1] https://habr.com/post/353530/


Clickbait?


Probably the journalist didn't want to get this guy in trouble by doxxing him.

I'm pretty sure the FBI would love to arrest him by now. Just like MalwareTech.


The very last paragraph kinda makes me feel bad for MikroTik, but I'd like them to add an auto-update feature to their routers. That would probably fix all these issues.


An "automatic" update that would potentially cause the router to reboot and bring down the network would go over very poorly with customers, even if it happens at 3 AM.

A better solution would be automatically checking for updates, and then sending an e-mail notification to the address associated with the router's owner/sys admin.

I "registered" my router and email address with Netgear about a year ago and I was shocked a few months ago when they actually sent me an e-mail to let me know that a new firmware update was available for my router.


I don't think automatic updates would be as disruptive as you think.

And having the ability to disable them and apply updates manually, combined with some forewarning like you are talking about (an email that says your router will restart tomorrow at 3am unless you do it sooner), would go a really long way.


A lot of MikroTiks are installed at WISPs and other ISPs... making unplanned reboots very disruptive.

Those of us using them on our corporate networks might be inconvenienced by a temporary outage, but that's unlikely at 3am... however, scheduling and doing these manually is still the best way for enterprise gear.


I doubt that most ISPs who can't be bothered to apply security updates are going to notice a 5 minute reboot.

Split the difference - email the user that an update will apply on $date unless they do it first, or if they delay it (and don't let them delay it indefinitely).


You can already script something like this into RouterOS (Mikrotik's OS).


Good - they should make it the default!


Don't forget that a lot of the customer base for MikroTik is in remote locations (i.e. P2P connections in rural areas) or small ISPs. Having the router in your office die on you (even during office hours) is a little different from all your customers calling you the same day because their only internet connection is gone.


I used to be a customer of a remote WISP, P2P in a rural area.

I can't speak for all WISP's, but we only had service about 12 hours a day, less if it was raining. Five minutes to reboot a router would have been invisible.


I used to contract with a WISP. I would regularly get calls to "reboot the router" from the owner. The "router" was a fiber switch at their CO. I would just do it when they asked. I wasn't a customer nor did I get paid enough to fix their network. Sorry if that was your connection. :)


ISPs would just have to disable it before installing the router. Still seems like a good default.


> An "automatic" update that would potentially cause the router to reboot and bring down the network would go over very poorly with customers, even if it happens at 3 AM.

Maybe the trigger for the automatic reboot could be more complicated than just a time-based trigger. Something like

    Reboot when
        localtime > 2 AM &
        localtime < 5 AM &
        traffic averaged over the last 5 min < 5 kbps
Basically reboot unless the router detects the network is being used actively.
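
For what it's worth, something in that spirit can already be approximated with a scheduled RouterOS script. A rough sketch, where "ether1" and the 5 kbps threshold (roughly 37500 bytes over a one-minute sample) are assumptions to adjust for your own setup:

    # At 03:00, sample the WAN byte counter for a minute; reboot only if the link looks idle.
    /system script add name=idle-reboot source={
        :local rx1 [/interface get ether1 rx-byte]
        :delay 60s
        :local rx2 [/interface get ether1 rx-byte]
        :if (($rx2 - $rx1) < 37500) do={
            /system reboot
        }
    }
    /system scheduler add name=idle-reboot start-time=03:00:00 interval=1d on-event="/system script run idle-reboot"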


Of course, if you're on vacation and relying on that router to be available for security cameras, an automatic firmware update that results in a bricked router can be more than a little disruptive.


It's a tradeoff. You have to balance that negative against the negative of having botnets of millions of never-patched routers.

Automatic updates should be the default, but you should be able to shut them off if you want to make a different tradeoff.


Automatic security updates should be the default; all other updates absolutely should not be. In the case of router patches there isn't much crapware to be upsold, but in general, if we're ever going to develop a code of ethics in this industry, I wish part of it would be a rule of hard separation between security patches and feature updates, and another rule that the latter should never happen automatically without explicit opt-in.

Yes, it's extra work for developers, but the result of not doing that is the present situation - a lot of users, including a surprisingly large population of non-tech-savvy people, will go out of their way to shut down automatic updates, to avoid having to deal with broken workflows, upselling, ads sneaking in, and forced reboots in the middle of a business presentation or a game (or a surgery).


Automatic updates have some of the same issues as telemetry. Windows Update, for example, has to send information about things like installed drivers in order to scan for updates.


An update shouldn't brick a well-built router; that's what watchdogs and secondary flash are for.


What about links that need to be available for failover or during emergencies? What about organizations that operate at those hours? I used to work at a 24-hour retail chain, and some stores in mining towns had their busiest hours around 4 AM, as busloads of miners came in to shop before the day started. We could _never_ upgrade those stores in the early morning hours.


So you're saying the defaults should be set up for unusual use cases like the ones you describe, even if that means we get botnets of millions of routers?

You're not going to define one set of secure-by-default rules that works for everyone. Rather, you want to define a set of secure-by-default rules that works for most people, then put the burden of reconfiguration and maintenance on those with unusual needs rather than on the majority.


Mikrotiks aren't really consumer-grade hardware (most of them, anyway). Some operators deliberately stay a version or so behind the latest because updates can break features, introduce instability, or require configuration changes.

Automatic updating could be crippling to ISP operators (Mikrotiks are very popular with WISPs and other smaller ISP operators).

> Basically reboot unless the router detects the network is being used actively.

For the average Mikrotik router, deployed at some WISP or small ISP, that's unlikely to happen.


You would only need to reboot if the kernel got updated. Otherwise just restart the affected services.

And kernel updates can be made faster with kexec, so you don't have to reinitialize the hardware. The flashing procedure itself could also be made interruption-free with dual flash, which most systems have anyway to avoid bricking the device.

It would take some effort to make it fast, but I think the update interruption could be brought down to the second range. You'd still lose NAT state, but that would only affect long-lived sessions like SSH.


Unfortunately, Mikrotik updates tend to change a lot of things and can cause issues. Recently the whole bridge implementation was rewritten, which required config changes if you had anything beyond the basic bridge/port setup. And the last "stable" update was bricking certain models by making them unbootable. If updates like these were applied automatically, there would be lots of broken routers...


Yep. They have serious regression issues. Sometimes things that are fixed get broken again, and updates often break other things.

Other vendors have similar issues, so it's not just a Mikrotik problem, but while it's uncommon with other vendors, I almost expect it from Mikrotik.


If they thought their consumer users would just go out of their way to update their routers... well, that's just not understanding how humans work.


I have this functionality scripted for my mikrotik routers, and it emails me when it's updating.


Reminds me of a black hat I knew in the 90s. He bragged that if he ever gained access to a system, he'd start patching vulnerabilities so others wouldn't gain access and make it obvious the machine had been compromised.


I can attest that there was at least one such black hat in the 00's.

One of my IT guys' mom complained about her machine being slow and having "too many pop-ups", so he planned to go to her place on the weekend and fix it. She called back a couple of days later and told him not to bother, as it was "all fixed now".

o.O

I lent him one of our loaner laptops, and he brought her computer back and put it on our test DMZ to see what was up. Yep, somebody had scrubbed all the malware and "search bars" off the machine and installed a free anti-virus package. The exceptions in the anti-virus made it easy to track down what was happening: the machine was set to send spam every night between 1 AM and 7 AM but was otherwise pristine.

My colleague had to do some serious soul searching before he decided to wipe it instead of just returning it ...


> As for MikroTik, the Latvian company has been one of the most responsive vendors in terms of security flaws, fixing issues within hours or days, compared to the months that some other router vendors tend to take. It would be unfair to blame this situation on them. Patches have been available for months, but, yet again, it is ISPs and home users who are failing to take advantage of them.

A system that requires users to opt in to security is not a secure system.


As opposed to Microsoft ramming updates down your throat whether you like them or not, and whether they break things or not?

The lock on your house door is 'opt-in'. If you don't lock it and someone steals something, is it the lock manufacturer's fault? The home builder's fault? Did they construct an insecure house? Ignorance of the proper operation of the lock is not an excuse not to lock it and doesn't shift the responsibility to someone else.

I don't know why people happily assume everything in IT is secure by default when in the real world, almost nothing is secure by default.


The pervasiveness of insecure systems is not a justification for designing an insecure system.


Indeed. Quite the opposite. When working with multiple layers of systems that will eventually have vulnerabilities discovered in them, you must design your environment to remain secure regardless.

The chances of someone infiltrating a properly designed (and maintained) environment without detection are very slim.

Defense in depth.


It takes effort to purchase, install, and configure MikroTik products rather than just using the router/AP combo you got from your ISP. Anyone who has done that is capable of patching occasionally. [EDIT:] And now that I've read TFA, I've learned that many of these are actually "edge" routers used by ISPs on their own premises. There's no excuse for an ISP not to keep up with patches.



OTOH people would scream bloody murder at a router that installed firmware updates and rebooted itself without asking. Just look at the reaction to how Windows 10 handles updates.


One PC operating system's poor implementation of automatic updates does not mean automatic updates can't be implemented gracefully on a router.


It's time MT got something like unattended upgrades.


It's trivial to script on a Mikrotik (rough sketch below). Arguably it should be in the default config; most people who know what they're doing will start with a blank slate anyway (/system reset-configuration no-defaults=yes).
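
Something along these lines - a hedged sketch, not an official unattended-upgrades feature; the schedule is arbitrary, and "install" reboots the router when it actually downloads something, so you'd only want this as a default on gear where a brief nightly reboot is acceptable:

    # Nightly "unattended upgrade": check for new packages, then install (reboots on success).
    /system scheduler add name=auto-upgrade start-time=03:15:00 interval=1d on-event="/system package update check-for-updates; :delay 15s; /system package update install"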


If only ISPs would start disconnecting negligent customers who continue to use exploited or vulnerable equipment. The incentives are not right: if they did, they'd risk losing a paying customer; if they don't, nothing bad really happens to them. I hate suggesting regulations and fines, but it's the only thing end users and ISPs will respond to.


Most of these devices are provided by ISPs, so it's the ISPs who are negligent. :)


Original post from Alexey at habr.com (Russian IT community): https://habr.com/post/424433/


Would have been nice if he had gotten around to the router at our office; we got cryptojacked: https://imgur.com/a/Q7Pmxth


Some black hats also patch the systems they compromise. Depends on the purpose.


Normally HN is not a place for memes, but I feel this is appropriate

https://m.imgur.com/JxH0lUT?r


Just start bricking them. It's a public service to the internet as a whole.


Hats off to this guy.


I had one once and I can recommend the "MikroTik Security Guide" by Tyler Hart.

It's quite an easy, step-by-step guide.


This is Hollywood material.




We detached this subthread from https://news.ycombinator.com/item?id=18201572 and marked it off-topic.


You seem to be outraged about culture itself.


Not really i'm just trying to give some helpful criticism however I lost hope in humanity a long time ago...

I'm just watching it burn with popcorn in hand.

Maybe add some more fuel to the fire in the form of sarcasm that everyone doesn't even realise is sarcasm... sigh...


Could you please not post unsubstantive comments to HN?


It's not a matter of being offended if you just want some gray-hat to mind his own business.


Patch me daddy



