How a Hacker Proved Cops Used a Stingray to Find Him (politico.com)
624 points by mhb 8 months ago | 158 comments



Something doesn't seem to add up. First of all, the story fails to mention that those devices don't just track the target; they track and record all phones in their range, which is a massive breach of privacy and the real issue with these devices.

Furthermore, they allegedly already had his IP, so why bother with a stingray? They could simply tell his cell carrier to provide them with all his location data (for additional dates as well).

It seems to me that they went fishing with their stingrays because they didn't in fact know his real IP address, and only knew his approximate whereabouts from a source or from other mistakes he made.

It wouldn't surprise me if they only had his VPN's IP and were just looking for everybody connecting to that VPN within the stingray's range, and this is how they found him.


From the article:

> Rigmaiden eventually pieced together the story of his capture. Police found him by tracking his Internet Protocol (IP) address online first, and then taking it to Verizon Wireless, the Internet service provider connected with the account. Verizon provided records that showed that the AirCard associated with the IP address was transmitting through certain cell towers in certain parts of Santa Clara. Likely by using a stingray, the police found the exact block of apartments where Rigmaiden lived.


> Furthermore, they allegedly already had his ip, so why bother with a stingray? They could simply tell his cell carrier to provide them with all his location data

Remember that getting subscriber data/metadata from ISPs requires a warrant, and that a single tower's coverage could span a 6-12 sq. km area (plenty of space to hide in).


Can't they use multiple towers to triangulate the exact position? Then the area should be rather small. Furthermore, if they already had the offender's IP, getting a search warrant should not have been a problem at all. This tells me there is more to this story.


I think triangulation works best when devices do handoffs between towers, as they query each one to see which signal is strongest. For a stationary device like a mobile broadband dongle, I'd expect it never to hand off.

Getting a search warrant is easy, but not getting one is even easier and leaves no paper trail.


My memories of flashing phones circa 2003-2004 may be a bit rusty (by the way, back then "flashing" meant something completely different from today's ROM uploads; it was more akin to poking machine-code memory locations in BASIC on 8-bit machines :) ), but if I recall correctly, the device periodically scans all the BTSs it can see and sends that list, with signal strengths, to the network (and the network decides which tower the device should connect to!). Something tells me the device even stayed connected to multiple of the strongest BTSs at once, but that may be a false memory.

Of course, that was in the old GSM days; it may be something completely different today in 3G/4G/LTE, but then so are carriers' location hardware and algorithms.

But no, no handoffs were needed to track you very precisely even back then. There were ways around it, but given the poor opsec in this case, it is doubtful anything like that was used here.
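
A toy sketch in Python of the network-side decision described above (cell names and power values are made up):

    # Hypothetical measurement report: neighbour BTS -> received power (dBm)
    report = {"BTS-017": -71, "BTS-042": -63, "BTS-105": -88}

    # The network, not the handset, picks the serving cell
    serving_cell = max(report, key=report.get)
    print(serving_cell)  # -> BTS-042 (strongest signal)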


That's not true. Cell phones have been using clock-synchronized triangulation since the mid-2000s.

https://en.wikipedia.org/wiki/Enhanced_9-1-1#Wireless_enhanc...


Keep in mind this was 10 years ago. Phone triangulation was much less accurate then.


I don't know about Verizon, but all the cellphone carriers I know of have carrier-grade NAT set up. That means the public IP they had would correspond to many customers with different private IPs. The article says Verizon matched the IP address to a particular AirCard, but maybe that was an oversimplification in the article.


My guess is that Verizon matched the IMEI of the AirCard used on a particular IP address at a particular moment in time. Then the feds only had to scan for that IMEI within the geographic area provided by Verizon.
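
A sketch of the kind of lookup that guess implies, with entirely invented data: behind carrier-grade NAT, a (public IP, port, time) tuple maps back to a single subscriber device.

    from datetime import datetime

    # Hypothetical CGN translation log:
    # (public_ip, public_port, session_start, session_end, IMEI)
    cgn_log = [
        ("166.137.88.10", 40212, "2008-07-01T10:00", "2008-07-01T10:30", "353915020000011"),
        ("166.137.88.10", 40213, "2008-07-01T10:05", "2008-07-01T10:20", "353915020000029"),
    ]

    def device_for(ip, port, when):
        t = datetime.fromisoformat(when)
        for log_ip, log_port, start, end, imei in cgn_log:
            if (log_ip, log_port) == (ip, port) and \
               datetime.fromisoformat(start) <= t <= datetime.fromisoformat(end):
                return imei
        return None

    # "Which device was behind this public IP and port at 10:15?"
    print(device_for("166.137.88.10", 40212, "2008-07-01T10:15"))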


The salient bit:

Police found him by tracking his Internet Protocol (IP) address online first, and then taking it to Verizon Wireless, the Internet service provider connected with the account. Verizon provided records that showed that the AirCard associated with the IP address was transmitting through certain cell towers in certain parts of Santa Clara. Likely by using a stingray, the police found the exact block of apartments where Rigmaiden lived.


For anyone wondering how exposed their positional privacy is: even the Verizon routing prefix could have homed the cops in on which provider to drill down into.

The takeaway here is that end-to-end protocols, as currently written, necessarily send the src IP in the packet. If we'd designed IP to carry the src IP as payload, encrypted the payload (TLS-style), and put only the destination IP in the outer packet, things might be different.

I've asked this question over the years, since the 1980s: given that we thought source-based routing was a thing, it's understandable that we designed for simpler times with the src IP in the packet. But given that it's not a thing now, and privacy is, why do we still send the src IP in the packet? It doesn't mean anything useful to most agents. NAT/CGN devices don't actually care; they want you to be consistent about which interface you arrive on, and the 5-tuple has that, but other things with no end-to-end impact could be used. Beyond the NAT/CGN boundary, nobody cares: routing is done solely on the destination and the ASNs of the transits. Once at the destination, the payload is available to find the originator's address and return things.

The src IP is not actually necessary in the IP layer (in theory).
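
A minimal sketch of that road-not-taken packet layout, assuming (unrealistically) a pre-shared key so the bootstrap problem discussed below doesn't arise: the outer "header" carries only the destination, and the source address rides inside the encrypted payload.

    import os, socket
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)  # pre-shared, for illustration only
    aead = AESGCM(key)

    def build_packet(src_ip, dst_ip, data):
        # Outer "header" is just the destination; the src IP is payload.
        inner = socket.inet_aton(src_ip) + data
        nonce = os.urandom(12)
        return socket.inet_aton(dst_ip) + nonce + aead.encrypt(nonce, inner, None)

    def open_packet(packet):
        # The receiver decrypts to learn whom to reply to.
        nonce, ct = packet[4:16], packet[16:]
        inner = aead.decrypt(nonce, ct, None)
        return socket.inet_ntoa(inner[:4]), inner[4:]

    pkt = build_packet("198.51.100.7", "203.0.113.9", b"hello")
    print(open_packet(pkt))  # ('198.51.100.7', b'hello')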


If the source IP is encrypted, the recipient needs to decrypt it first in order to be able to send a response. To decrypt it, the recipient either needs some shared secret with the sender, or the sender needs to encrypt to the recipient's public key.

The parties cannot obtain the shared secret the usual way, via a Diffie-Hellman exchange: it cannot be performed, because it requires back-and-forth communication, which is what we are trying to establish in the first place. Of course, we'd still need certificates signed by public authorities to deal with the standard MitM concerns.

The alternative would be for the sender to encrypt the source IP address with the recipient's public key. That would require a DNS-like system that stores and serves the public key corresponding to a given IP address you are trying to communicate with. This is doable, especially in an IPv6 world where you don't really have to reuse addresses, but it leads to all kinds of practical problems, figuring out which is probably best left as an exercise for the reader.
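
For the second option, a sketch using libsodium's "sealed box" construction (via PyNaCl), which is exactly this kind of non-interactive, ephemeral-to-static encryption: only the recipient's public key, hypothetically looked up from that DNS-like system, is needed to encrypt the source address without any prior round trip.

    import socket
    from nacl.public import PrivateKey, SealedBox

    # Recipient's keypair; the public half would be published in the
    # hypothetical DNS-like directory, keyed by IP address.
    recipient_sk = PrivateKey.generate()

    # Sender: one-shot, non-interactive encryption of its own address.
    ct = SealedBox(recipient_sk.public_key).encrypt(socket.inet_aton("198.51.100.7"))

    # Recipient: recover the source address in order to reply.
    src = socket.inet_ntoa(SealedBox(recipient_sk).decrypt(ct))
    print(src)  # 198.51.100.7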


cjdns [1] is an encrypted IPv6 overlay network that uses the first bits of your public-key hash to generate your IP address.

Theoretically, multiple keys can share an IP hash, but the handshake will fail if the intended recipient doesn't have the expected keypair [2]. So far this hasn't been an issue, but you are better off if you know the destination's pubkey beforehand.

[1]: https://github.com/cjdelisle/cjdns

[2]: https://github.com/cjdelisle/cjdns/blob/master/doc/faq/doppl...
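
A sketch of that derivation as I understand cjdns's scheme (double SHA-512 of the public key, address must land in fc00::/8; treat the details as approximate): keep generating keys until the hash yields an address in the right range.

    import hashlib, ipaddress
    from nacl.public import PrivateKey

    def addr_from_pubkey(pub: bytes) -> ipaddress.IPv6Address:
        digest = hashlib.sha512(hashlib.sha512(pub).digest()).digest()
        return ipaddress.IPv6Address(digest[:16])

    # Roughly 1 in 256 keys lands in fc00::/8, so this loop is short.
    while True:
        sk = PrivateKey.generate()
        addr = addr_from_pubkey(bytes(sk.public_key))
        if addr in ipaddress.ip_network("fc00::/8"):
            break
    print(addr)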


Just a half-baked idea: if you don't want to make a new request for every connection, and the recipient owns a huge block of IPv6 addresses, the recipient can create an asymmetric key pair where the public key is used as the IPv6 suffix. The United States Department of Defense owns a /13, which leaves 115 bits for public-key information. The sender can then use the IPv6 address as the public key.


115 bits of security is pretty good; for example 3DES, while not exactly a great choice today, is still considered secure and acceptable for use, with its 112 bits of security. However, I know of no public-key encryption scheme that would give acceptable security with only 115 bits of public key. With RSA, you need at least 1024 bits, and should probably use 2048. Elliptic-curve cryptography is considerably better here, but an ECC key of length n still gives you only n/2 bits of security, which would be way too little here.


A 2048-bit RSA key provides about the same security as a 112-bit symmetric key, though.


Yes, that's my point: you need a 2048-bit RSA key to get even ~112 bits of security, so there is no way to fit the public key into an IPv6 address.


Couldn't you do something with a key-stretching system [0] to turn the ~115 bits from the IPv6 address into 2048 bits of RSA input, though, since you definitely have the right amount of entropy?

0. https://en.wikipedia.org/wiki/Key_stretching


Nit-picking a bit / augmenting your train of thought: you can have a non-interactive Diffie-Hellman key exchange (via a PKI). As you say, the client would need to know the public key / have access to the certificate beforehand, and that would certainly require revamping the DNS approach we have now (even if we did not use DNS, we would still have to deal with DNS-like requests).


That's literally the second option I describe. What is your point?


Just wanted to clarify that the issue is not with the Diffie-Hellman key exchange itself, but with the way we currently do DNS plus IP routing.

Technically, the client could encrypt their IP with the key y = H(g^{s*rc}) (the result of the DH key exchange) and send (Enc(y, IP), g^{rc}), where H is a random oracle, g^s is the server's public key, and rc is a per-session nonce (or, more generally, a one-time key). The server can then compute the same key (raise g^{rc} to s and apply H) and decrypt.

So the second and third paragraphs contradict each other a bit, which was the sole reason for my clarification. I am not saying you are wrong or anything; I was just augmenting, so other readers don't get the wrong impression and decouple the Diffie-Hellman KE from public-key cryptography.

P.S. The remaining issue is making sure the server does not have to decrypt the IP for invalid connections, etc.; that is, avoiding DoS and leaking secrets, since we are operating on untrusted ciphertext.


Your tone is unnecessarily harsh.


What you guys are describing is starting to sound more and more like onion routing.


The point is, online security is a bad joke. You wrote it, you own it.

The real interesting question is what happens when you get owned by what you didn't write?


> but it leads to all kinds of practical problems, figuring out which is probably best left as an exercise for the reader.

Any examples? I honestly don't see any problems here.


That is a huge number of IPs, at a very high request rate, required for every new connection. It is orders of magnitude greater than DNS, and would be expected to have much lower latency.

Throw the trust issue on top of that: such IP-certificate repositories would very quickly be able to determine who is communicating with whom, and at what time. While this is currently possible anyway, current methods are less centralised, more expensive, and more visible.

You'd have to go full onion.


640k is all the memory you'll ever need.


ggm is referring to something more like every router doing NAT and having enough room in the packet to store state there instead of in main memory.


> Src IP is not actually neccessary, in the IP layer.

I like your thoughts, but I don't understand how it would have helped in the Hacker-Verizon-Stingray case.

First, an analogy: Suppose no one ever wrote a return address on postal letters. People wrote only the destination. The recipient has to open the envelope to see who wrote the letter. The postal system still works and it's way more private. However, if the authorities are watching your particular mailbox at your house, copying the outside of every incoming and outgoing letter, what privacy have you gained? They still know who you wrote to and who replied.

Now back to the Hacker-Verizon-Stingray case: When the hacker connects to Verizon, his phone must identify him as a Verizon customer at some protocol layer. The identifying info could be IMEI, IMSI, or Verizon account number. Otherwise anyone could use Verizon for free if there were no account. Likewise, when Verizon transmits info back to the hacker, Verizon has to know which cellphone it should go to, or at least which cell tower.

In your design, the outbound packets from hacker to Verizon contain cleartext destination IP, but the src IP is encrypted. Fine, but then the Stingray finds the Verizon account number associated with that connection -- that info being available at some protocol layer. The Stingray then watches for any connections from Verizon back to the hacker that use the same Verizon account number. The Stingray thus collects both sides of the connection just as before. Am I missing something?


I'm not an expert, but I believe what you're missing is the current legal gray area around metadata like IP addresses. Pulling account numbers associated with contracted accounts sounds more like it needs a warrant.


* Secure handshaking requires interactivity, unless you share secrets with your actual partner (no, your CA trust store isn't enough) in advance. So your first packet would leak it.

* To return ICMP error messages ("destination unreachable"), otherwise you'd have long timeouts.

* Rate-limiting outside the server (e.g. DDoS protection). Many ISPs do actually filter source IPs. (Of course you can't on the backbone, and there are plenty of shady ASes.)

* Leaving it off won't help against correlation attacks.

* Most applications will need it, so it makes sense to have it in the network layer instead of coming up with incompatible implementations above.


uRPF requires it to help prevent spoofing (BCP 38).


Like AS_PATH, this is one of those reasons people cite, but, like path security in BGP, it's not what people actually do very much. I like BCP 38 and I like MANRS, but... traction is hard here.

I was making statements about the road not taken: we've had the src IP in packets since before the BCP existed.


In our thought-experiment world where each address has a public key that can be used to encrypt the payload data destined for it, the public key of the source can also be used to sign the data, ensuring the sender address isn't spoofed.
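
A sketch of that idea with Ed25519 (any signature scheme would do; discovering the key for a claimed address is the assumed hard part): the sender signs the claimed source address plus payload, and a validator that can look up the key for that address rejects forgeries.

    import socket
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    sender_sk = Ed25519PrivateKey.generate()
    sender_pk = sender_sk.public_key()  # assumed discoverable from the claimed source address

    src, payload = socket.inet_aton("203.0.113.9"), b"request"
    sig = sender_sk.sign(src + payload)

    # An amplifier or the destination verifies before responding;
    # a spoofed source fails verification (raises InvalidSignature).
    sender_pk.verify(sig, src + payload)
    print("source address verified")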


What validates that, the destination or intermediate devices? Spoofing is often just a means for volumetric DDoS attacks; if the destination is responsible for validating sources, then we're no better off there.


Presumably it would be validated by the destination, but that doesn't matter. The reason spoofing matters so much for volumetric DDoS attacks isn't the spoofing of traffic sent directly to the target; it's that the target is spoofed as the source address in traffic sent to third-party amplifiers, which then respond to the target.

If the third party amplifiers in this scenario can validate that the traffic is spoofed, it cuts out amplification attacks.


Your router requires it, but nobody else on the Internet needs it.


There's way too much baked into the current infrastructure (and two instances of layer 3) for this to be feasible. Access-control lists used for filtering, identifying candidate addresses for services like NAT, and critical data-plane operations like path MTU discovery all rely on the source address being available in the IP header. Sure, we could re-engineer things to not be this way, but at what cost?

Realistically (and unfortunately), if you don't want to be tracked then you're going to need to do some combination of tunneling, proxying, and encrypting.


Yes. It wasn't a statement about the future, as much as about the past.


Despite the gargantuan effort, you wouldn't actually gain substantial privacy from such a move. True, a single packet wouldn't identify both parties, but an exchange of packets in rapid succession, a TCP handshake, or even a UDP stream would immediately leak the same information to any interested hop in the network.

Furthermore, eavesdropping for metadata on the line is one of the lesser privacy concerns for most people. What really matters are privacy violations by the providers higher up the stack, where the address of both endpoints must be known to enable communication.


How would you return errors if the destination is known to be unreachable?


Or how would you troubleshoot which hop was the source of a routing problem, without a source IP to send a message back to?

These kinds of discussions seem utterly divorced from the reality of networking to me.


Oh, we talked about that. One idea was that you could include the origin AS, and each AS along the path would change it in flight.

That also didn't fly, for obvious reasons. The src,dst pair as part of a tuple was just simpler.


AS != hop, especially when BGP isn't even present (e.g. large private networks).


Putting the return IP encrypted in the payload breaks NAT (see FTP helpers).

You do understand that the return traffic has to initially be addressed to the IP of the NAT device, not to what the client thinks its IP is, right?


And nothing of value was lost :)


Well, except then NAT doesn't work, and the IPv4 internet (which is the vast majority of it) is broken. I wouldn't call the vast majority of the internet something without value.


Yes, this was the key fail. If he'd been careful enough not to leak his IP address, he would arguably have remained free.

I was, for example, using VPN services and Tor well before 2008. And I've never been more than a gray-hat hobbyist sort of "hacker". Anyone seriously into criminal activity who didn't reliably hide their IP address was a fool, even then.

My point isn't to dump on Rigmaiden. It's just that articles like this contribute to the FUD about privacy being impossible now. The reality is that most criminals have horrible OPSEC. Especially when they're just getting started. And then they're careless about historical connections.


What anonymous methods would he have had for buying VPN access? Mailing cash? And if there hadn't been stingray technology available, all the IP address could tell them was that he was in the San Jose area, which the mail already told them.


IMHO, it's less about privacy being impossible, and more about privacy being difficult and expensive.


I don't dispute that it's difficult. But expensive? How so?


Any decent VPN service has a monthly cost. That isn't trivial for everyone.


One can get decent VPNs for ~$3 per month. The free version of SecurityKISS is good too, although it's capped at 300MB per day, and offers only 5-10 exit IP addresses.


> Likely by using a stingray

Article title: “proved”


The hacker was exposed because of poor OPSEC (due to tracking of his IP address).

> Rigmaiden had received boxes and boxes of criminal discovery that would help him understand how the government planned to prosecute its case. In the penultimate box, he saw the word “stingray” in a set of notes.

The authorities were exposed because of poor OPSEC as well. They weren't supposed to ever mention “stingray”.


It's difficult to never mention something so important, even if it's a local department rule. And once something like stingrays is well known to the public, it can be a fair inference that one was used in any case where that is the simplest explanation for how the police found or tracked a suspect. Poor OPSEC on the part of the police is basically a given in a society where we have public trials and due process (including discovery).

Whether the law allows or should allow the use of such devices, and whether with or without a warrant, is up for debate, but it can't allow hiding their use, not in an open society with due process. It was always bound to be the case that some judge would think so, that some police note would leak this, that some police officer would testify about it, that a Snowden would leak it, or that the public would figure it out anyway (especially when it comes to active devices).

So I wouldn't blame bad OPSEC on the part of the police here for anything. (Not that you are. Just saying.)

The defendant in the story, BTW, is not very sympathetic. In general, for test cases, one wants a sympathetic defendant. That's because judges are at least somewhat biased, typically. A judge has to imagine a much more sympathetic defendant and set of circumstances in order to convince themselves to continue with a line of argument that leads to the defendant being cleared on a technicality.


The story didn't mention how they were able to get his IP address in the first place. That level of detail is important for this community!


He was filing fake tax returns. That probably exposed his IP in logs on government servers.


What on earth? Isn't using a VPN the bare minimum when you're doing something potentially illegal?


Maybe he did. Unless he was careful in picking his VPN provider, it probably had some level of cooperation with the FBI.


He didn't, but getting anonymous access to a VPN wasn't exactly the simplest thing then. And considering they were able to find the precise location of the AirCard anyway, it wouldn't really have added any protection.


The safe bet is that every site that you access logs your IP address. At the very least.


Should have used the USPS


If I had to guess, I might say poor OPSEC is somewhat common...

EDIT: while we're here, do you have a single "start here" article/site for the basics of less-poor OPSEC?



All secrets eventually leak. It's a question of time. Not even state actors with unlimited resources can prevent secrets from leaking. So this wasn't bad operational security by authorities. It was a fundamentally flawed operation.

Security through obscurity has limited and unpredictable usefulness. Good OPSEC can delay a leak, maybe. But OPSEC is still much harder for defenders than attackers.

The article doesn't mention it, but SURELY agency planning about this particular secret covered the next steps to take when it leaked.

Ordinarily, good OPSEC has defense in depth. Secrets should have a limited useful lifetime. "Stingray" as a secret doesn't: once bad actors know their phone locations can be targeted, they can't un-know it.

It should be obvious to the holders of secrets when they leak, so they know they're compromised. Having this "Stingray" crop up in a big mess of bankers' boxes full of court docs isn't obvious.

Security by obscurity needs a plan B ready to roll at any time.

Much better is transparent security, where the tech is well known, the actual secrets have limited useful lifetimes (key-rotation and forward secrecy for example), and reasonable controls exist (search warrants in this case).

If law enforcement executives don't know this, they need to go back to school. But they probably do know it, and they're practicing security-by-obscurity on their plan B.

(I don't defend somebody who stole large quantities of taxpayers' money. Not at all. But the rule of law--search warrants--is vital.)


You are only correct if stingrays are a recent invention. On the other hand, I am pretty sure they have been in use for at least a decade now, which means police have managed to exploit security through obscurity in a vast number of cases by now.


Agreed. For somebody who seems intelligent and driven, it seems like it would have been beneficial to put in more effort upfront. But then hindsight is 20/20, and it seems like for your OPSEC to be good, it has to be your job (i.e., something that provides money so you can focus solely on it), or you have to be truly paranoid.


Stingrays were being used as early as the 1990s by federal law enforcement. They were used to help locate Kevin Mitnick in North Carolina.

Edit - I recall reading that years ago in Tsutomu Shimomura's book 'Takedown' (published in 1996). Outside of this, I have no other reference. It's a good read BTW. https://www.amazon.com/Takedown-Pursuit-Capture-Americas-Com...


Your assertion re early 1990s is backed up here: https://www.wired.com/2014/03/stingray/

"Use of stingray technology goes back at least 20 years [ <= 1994]. In a 2009 Utah case, an FBI agent described using a cell site emulator more than 300 times over a decade.... "


I seem to have gotten rid of my copy of Takedown, but Jonathan Littman writes in The Fugitive Game, paraphrasing John Markoff:

"...Shimomura was sitting in the passenger seat of a Raleigh Sprint technician's car, holding a cellular-frequency direction-finding antenna, and watching a 'signal-strength meter display its reading on a laptop computer screen.'"

This sounds, perhaps, functionally equivalent to a modern stingray, but I suspect it was not operating as a cell-site simulator. The hardware/software required at the time to "man in the middle" Mitnick's cellular calls would not have fit comfortably with Shimomura in the passenger seat of a car and would not have run smoothly on a mid-90's era laptop. Also, the bandwidth required to forward the connections would have only been achievable over directional microwave or landline which seems unsuitable for use in a moving vehicle. However, this was the dawn of digital cellular networks. The calls would not have been encrypted in any way at the time so tracking the source of specific emissions using triangulation would have been fairly trivial, especially with the assistance of a Sprint technician with access to the CDMA code Mitnick's handset was using at any given time.

Actually, I just checked and it seems Sprint didn't launch its PCS network until later that year[0] so it's possible the network in question was analog(?), making simply "listening in" even easier, without having to simulate anything.

[0]http://articles.baltimoresun.com/1995-11-16/business/1995320...


Actually, the software to do this kind of thing was what Mitnick was after!

It would be laughably easy by today's standards. Cloning AMPS phones (with ESN/MIN pairs from "trashing" and bootleg Motorola service software) was within the reach of bored teenagers, but the elusive "vampire phone" required decoding the control channel. This was "hard" at the time.

It could be done with the right service equipment or, say, suitably hacked firmware for something like an OKI 900...

No stingray required; you could indeed do everything passively. Very different times. Today you could probably do it all by dragging a few blocks around in GNU Radio's GRC tool.


They even use one in an episode of The Wire (circa 2003).


This is a really old story. The article is new, but I don't see any new data. It should be labelled (2013).

hn.algolia.com/?query=rigmaiden


Read the whole article; it has newer parts of the story and doesn't stop in 2013:

"Several months later, in April 2015, the New York Civil Liberties Union (the New York State chapter of the ACLU) managed to do what no one else could: successfully sue to obtain an unredacted copy of the NDA that the FBI had law enforcement agencies sign when they acquired stingrays"


This guy wasn't really a hacker, just someone who knew a little bit about tech and figured out a flawed system. I think that sums him up as a scammer instead of a hacker.


To be fair, that is fundamentally what hacking is: noticing a flawed system and exploiting it. "Just scamming" would be convincing people that they need to use him to get tax refunds while he cashes them and takes a cut. Closer to fraud in many ways, really.

Really, the term hacking could use more consistent sub-definitions per type. Social engineering is well established, but there isn't even a uniform term for quickly conveying the nuance of, say, cracking DRM offline vs. getting into a server.


This. He filed false tax returns with stolen identities; I fail to see any activities one would associate with a technical hack. I also have a hard time understanding why someone who is engaging in criminal acts has a reasonable expectation of privacy related to the commission of said crimes.


You can't know that a priori, though. The cops should not get to say "I want to track this cell phone, it's being used for a crime" without getting a warrant. That is pretty much exactly what warrants are for in other contexts; cell phones should not be treated differently.


The cops have to abide by the law; that's their weakness.


It seems like a terrible security model to "trust whatever cell site is in range." Are there any alternatives to this state of affairs?

For example, can your carrier supply you with a whitelist of its towers so your phone ignores everything else? Or could each tower's legitimacy be cryptographically signed by the cell provider? Of course you have to trust the security infrastructure of your cell provider, but that seems slightly better than just trusting everything by default. (Disclaimer: I know nothing about cellular infrastructure...)
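
A sketch in Python of that signed-tower idea (entirely hypothetical; real cell networks don't work this way today): the carrier signs each tower's broadcast identity, and the phone verifies against a pinned carrier key.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    carrier_sk = Ed25519PrivateKey.generate()
    carrier_pk = carrier_sk.public_key()  # pinned in the SIM/baseband (assumption)

    def announce(tower_id: bytes):
        return tower_id, carrier_sk.sign(tower_id)

    def acceptable(tower_id: bytes, sig: bytes) -> bool:
        try:
            carrier_pk.verify(sig, tower_id)
            return True
        except InvalidSignature:
            return False

    tid, sig = announce(b"tower-4471")
    print(acceptable(tid, sig))            # True: genuine tower
    print(acceptable(b"stingray-1", sig))  # False: signature doesn't match

A real design would also need freshness (timestamps or challenges), since a stingray could simply replay a genuine tower's signed announcement.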


There is an Android app called IMSI-Catcher Detector [0] that is supposed to help you detect when you're connected to a stingray-type device. I ran it for around a year and it never once picked up on anything. I'm not involved in the project and can't personally say whether it will catch anything or not, but it is open source [1].

[0] https://cellularprivacy.github.io/Android-IMSI-Catcher-Detec...

[1] https://github.com/CellularPrivacy/Android-IMSI-Catcher-Dete...


The police department in Reno, NV uses this in conjunction with thermal sensors from a helicopter in northern Reno and Sun Valley. Mostly for drug busts.


> "The Hacker began breathing more heavily."

Obviously this is fiction. Romance or thriller?


Is there a physical device that can provide a VPN-only WiFi connection, so that a laptop or WiFi-only iPad (say) that connects to it never risks exposing its IP?


You can configure your router to route all traffic through a VPN. It's a standard setting in all routers.


That is incorrect. Many, many routers do not, in fact, have a setting to route your traffic through a VPN. My Linksys, for instance, does not. Famously, people flash DD-WRT or Tomato onto their routers to enable that.


Any Linux machine can do this using iptables.


I haven't worked with this stuff in a couple of years (subpoenaed cell records), but given the dates here, I didn't think cell phone towers could give a precise location. My understanding was that each tower had three sectors, so you could see what general area a phone was in. With multiple towers you might be able to get a more accurate reading, but the article makes it sound like a StingRay can actually see a phone's position in real time.


A mobile telephone transmits an electromagnetic signal. Direction finding and triangulation for such signals have been known practices since the 19th and early 20th centuries. Remember that this base station impostor is mobile, and that there can be more than one.
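
A toy illustration of the triangulation half of that, in Python (tower coordinates and range estimates are invented): subtract the first circle equation from the others to get a linear system and solve it by least squares.

    import numpy as np

    # Hypothetical tower positions (km) and estimated ranges to the handset (km)
    towers = np.array([[0.0, 0.0], [5.0, 0.0], [2.0, 6.0]])
    dists = np.array([3.61, 4.24, 3.00])

    # (x - xi)^2 + (y - yi)^2 = di^2; subtracting the first equation
    # from the rest leaves a linear system A @ [x, y] = b
    A = 2 * (towers[1:] - towers[0])
    b = (np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2)
         + dists[0] ** 2 - dists[1:] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pos)  # ~[2. 3.]: least-squares estimate of the handset position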


Filing fake tax returns online and thinking an AirCard keeps him anonymous and un-locatable hardly makes him a "hacker", now does it?


What happened to the charges against this guy? Was the original search via stingrays ruled unconstitutional?


>By late January 2014, Rigmaiden and federal prosecutors reached a plea deal: He’d plead guilty and prosecutors would recommend that he be given a sentence of time served. The agreement was signed on April 9, 2014.

Like many cases involving stingrays, the charges were essentially dropped once challenged in court.


From the article, he pled guilty in exchange for time served.


Back in the day, a beige box with a laptop was one way to avoid being traced. Of course, this ran the risk of your being physically identified if you operated the laptop directly from the beige box location.


Jesus. I wanted to keep reading that article, but halfway through, my phone was hot enough that it was burning my fingers and 20% of my battery had disappeared. What on earth is Politico doing?


It's quite the thing. I profiled it with and without ads, and it looks like the ads are the culprit. There are two of them just sitting there using CPU time the entire time the page is open. (I had two Facebook ads that had animated text being typed. They continued to use 100% of the CPU even when the animation was complete.)

There should really be some sort of CPU/power budget enforced by one's phone on a per-page basis. If there's no user action going on, a page should only be allowed to run a certain number of instructions.


It's always the adverts that screw up a site. Maybe it's mining cryptocurrency too?


I recommend Firefox Focus for mobile. It blocks JS and third-party tracking by default. I wish they'd let you toggle each of those options individually instead of both at once, though.


Or normal Firefox with uBlock Origin if you want to maintain history, logins, etc.


Yeah -- my advice is to use a good ad blocker/tracker blocker (uBlock Origin, Ghostery, etc.). Power consumption, CPU load, and memory usage for browsers improve considerably.

P.S. Yes, people are paying in power and time (CPU and personal) to watch ads in order to pay (?!) for websites, while providing private information on top.


I keep JS turned off by default on my phone and make exceptions. Brave browser makes it super easy to do this. Politico was actually the reason I started doing this.


Just one of many great reasons to use Brave.


Wow, I just installed Brave, and it also has fingerprinting protection. Does that really work? I didn't find it in Firefox.


Probably crypto mining.


I thought I was imagining that! If somebody knows a good JavaScript profiler, I might send a screenshot of that to the author. Hopefully they will choose a better outlet next time, or fix the Politico site.


Tracking you. ;)


Firefox Focus is your friend.


I found Focus woefully under-featured compared to just using Firefox's private browsing. Last time I tried it, you couldn't install any extensions (Privacy Badger, uBlock Origin, etc.) and also couldn't disable JS.


A condom ain't nothin' but a bridge for a crab, lad.


Ad networks need to add simple perf tracing to the ads they serve and boot off the high-consuming ones.


How can he call himself a hacker when he doesn't know how to hide his IP?


He never did, as far as I can tell; the FBI did, presumably because they initially thought he was hacking, and the name stuck.


He didn't call himself a hacker. The cops did.


I value privacy, so I don't own or use a cellphone.


I lost my phone somewhat recently. It's been great: I read more books and have a lot fewer distractions throughout the day. The privacy implication is a huge bonus.

The main drawback has been that people rarely label apartment doorbells anymore, so if you don't know which button to press you're in trouble. Another is getting hold of old friends if you don't know their email address (I don't have Facebook either).

Overall, I'm fairly content with the situation, and am seriously considering not getting a new one. Another benefit is that I'm more likely to bring my laptop if I go somewhere, and thus when I do connect to the internet and have time to kill, I often get work done instead of mindlessly browsing HN/reddit or playing games.


You must not be looking for work, or for that matter have a job.

My phone is indispensable for the kind of work that I do, I literally couldn't do my job without it.


I don't see a strong reason for a software developer to need a phone:

- Async communication (slack/email) can be checked at your computer.

- Voice calls are usually done with your computer anyway.

- If you are on call, an old-fashioned pager can be used.

Frankly, although I do have a smartphone, I wouldn't want a job where I was required to use it, or one where I was expected to be available at all times unless on call.


Ditto. I'm ~9 months into my current software developer gig, and the only time I used my phone for work was when the company internet was down, and the data service was bad enough that I barely even used it for that. The idea that anyone "must not [...] for that matter have a job" if they don't miss their phone is bogus.


If email works fine, why have Slack?

Does Slack have better uptime than email?

Does the office have redundant ISPs?


Ha, funny you should mention that. I am in fact looking for a job. Luckily the one interview I've had since losing the phone was on-site.

If I need to participate in on-call rotation or similar, I expect the employer to issue a phone. For a remote job, I will obviously get one myself -- but then strictly for job-related activities.

(by the way, if anyone is looking for an experienced infrastructure engineer/"DevOps" guy, give me a cal...errh email)


Interesting calculus of choices there: I don’t miss the distraction or privacy implications of my personal phone, yet I expect a company to issue me a phone which comes with distraction and privacy implications (the subject surveillance of the article disregards whether a pocketed phone is corporate because it can work with numbers directly), and I also don’t maintain a landline to sit for a phone screen to find that opportunity in the first place, so I expect to talk to you over Zoom (ceding more privacy; surprise, cellular call content is legally sensitive for LE, video packets aren’t) or in person.

You probably don’t realize nor intend this, but that can be a large red flag for your candidacy from the other side of the table, particularly if the role involves security because you’re broadcasting a slight misjudgment of your threat vectors and exposure. The number of resumes most folks go through, expecting ravens from Winterfell will get you dropped fast. Torvalds could probably get away with making initial hailing frequencies that difficult, but you or I should just buy a phone number of some kind, as much as it sucks.

The phone isn’t the problem if you apply opsec correctly, buy the right one, and operate it like a compromise hazard. (It is.)


A Skype number or some such might do in a pinch, even for phone interviews.


I went several years using only WiFi services on my phone, including voice and text services if I really needed them. Unfortunately it didn't help me much: being OCD, I tend to quickly find other things to focus too intently on.

Anyway, the point is, if you need a temporary phone number while job hunting, try Google Voice? I hate Google now, but this may help temporarily, or maybe you can find a better provider (other than Google, that is).

Good luck with the job hunt!


There are a lot of jobs you can do without a phone. Like Starbucks barista, or software engineer.

And the jobs that require a phone often come with a phone provided. So you still don’t need one yourself.


Sounds more like it's related to your work. I don't need a phone for my job at all, but it makes some things more convenient


OK, but most jobs do not require the use of cellphones. Aside from oncall periods, my software engineering position certainly does not.


> I value the privacy so I don't own or use cellphone.

This is a little extreme, but I've started turning off my phone or putting it in airplane mode when not expecting a call.

In addition to not being as distracted, I've had a marked decrease in spam calls - I think they tend to mark phones that repeatedly send them straight to VM as "cold".



The phone being off, or in airplane mode, is no longer enough.

Complete operating systems run on chips in that phone that you have no knowledge of or access to.

Trusting a software solution provided by the OS, a pretty high-level abstraction relative to the hardware, to actually turn off the radio is insane in this day and age.

Furthermore, with permanent batteries (or a backup battery hidden inside), being off to the user doesn't mean anything either.


I'd love to read a source that describes this in more detail.

Assuming such an exploit exists, I don't think I'd be targeted with it. It's my understanding three letter agencies tend to hoard that sort of thing, not blast them at random privacy aficionados.


I remember reading an article years ago: FBI taps cell phone mic as eavesdropping tool (2006).

> the eavesdropping technique "functioned whether the phone was powered on or off."

https://www.cnet.com/news/fbi-taps-cell-phone-mic-as-eavesdr...


Thanks, that seems to apply to older phones.

I would hope that the FBI would not override airplane mode on a smart phone... what if they did so while a suspect was actually on a plane?

A warrant doesn't give police the right to endanger others IIRC...


> what if they did so while a suspect was actually on a plane?

Nothing would happen. Just like nothing happens to the thousands of people who forget to switch their phones to airplane mode every day.


Look up "Broadpwn" for an attack on Broadcom's BCM43xx radios.


Thanks, this is exactly the kind of thing I want to read up on!


Glad I could help!


I'm curious as to where this backup-battery thing came from. Surely the likes of iFixit would notice extra batteries. If you're going to assert that such a thing is still hidden, then frankly, your mindset is such that there are no lengths you can go to to protect yourself from... whatever?

I would think a phone with a removable battery would otherwise serve your concerns.


I didn't say backup batteries were in use, or assert them as a thing, or hidden.

The sentence was about permanent batteries, with an aside about how technically a removable-battery phone could still have power somewhere with the battery removed.


Surely someone with some RF gear could quickly determine if an iPhone, say, in Airplane Mode ever transmits.


Such a thing would likely be used on specific targeted individuals, not the general public, making detection far more difficult.


You should place it in a metal box if you really want privacy while not using your cellphone.



I really wish I could do this 100%.

I leave it off most of the time, but I have to be on-call for my day job every few weeks and can't really not have a phone then and still keep my job.

It's a shame that phones are so closed and proprietary. I looked into changing a phone's IMEI number, and it turns out that's illegal in most places and a serious no-no. How can there even be a law like that??


The probability of this affecting you, if you're not a criminal, is probably lower than the probability of dying every time you take your car anywhere. It's a really big sacrifice in expected utility.


Instead of "criminal", I think you mean "intentionally engaged in an illegal enterprise". From the context, I doubt you mean to restrict the statement to those already convicted of crimes. Beyond that, it's very tough to know if you're currently committing any crimes (or any non-criminal illegal activity), as we don't even know how many laws we have.[0]

I agree that the vast majority of those who will have trouble are intentionally engaged in illegal activity. But, at any given time, there must be both a fair number of people who are unintentionally committing illegal acts and people who are falsely believed by law enforcement to be committing illegal acts.

There is an argument to be made that everyone should be more careful about privacy, thereby increasing the cost (and opportunity cost) of invasive investigations and forcing law enforcement to be more selective about the degree of certainty they have before employing more invasive investigation techniques.

[0] https://www.youtube.com/watch?v=d-7o9xYp7eE


How did you post this comment?


Computers, ever heard of them?


Parent’s point was probably that the connection used is still traceable


Except that:

1) If you do it from home, the govt already knows where you live, or you can use proxies, or Tor, or a VPN.

2) If you don't do it from home, you are practically anonymous if you change your MAC address and use someone else's wireless network, which is ubiquitous today in airports, restaurants, etc.


Another thing that is ubiquitous today in airports, restaurants etc. is CCTV.


It's technically infeasible to locate someone via CCTV unless you have an accurate location already, for precisely this reason - there's a massive surfeit of data to search through, and current facial-recognition software is incapable of the task.


Sure, but the case here was "we have the IP address, we want to know who was behind it", which is certainly feasible.


> Another thing that is ubiquitous today in airports, restaurants etc. is CCTV.

Which is not retained indefinitely. If you're not committing a crime, that footage will be gone in 30-90 days.


Your phone is always on you. Your computer, less so.


Haha never :)

For someone so paranoid about privacy as to not even own a "dumb" phone, I was curious what type of computer/internet setup they would be using to work around all the privacy traps such as browser fingerprinting, traffic analysis, deep OS and hardware exploits, linguistic analysis, etc.


You could probably become effectively impossible to track by getting a Librem laptop (no Intel ME [1], hardware kill switches), running some variant of Linux, using Tor or a secure VPN, and disabling JavaScript (there are probably other ways to reduce fingerprinting; I'm no expert).

While it wouldn't erase every hint of your presence, it takes a correlation of data to track someone; disparate trackable info is insufficient.

[1] https://puri.sm/posts/purism-librem-laptops-completely-disab...



