How does this coincide with the 3.1.1 release though? I was still on 3.0.15 and there are new features and other fixes in 3.1.x. Do the new features and the other fixes happen to be reasonably tested and ready already?
Was it really all about the “level of concern”, as you say, and you wouldn’t have changed this without the exposure?
He was (made) aware of the problem and chose to keep the default.
The first report made it sound like a very specific issue for that one user, so that user was given a way to disable it.
The second made it sound like an entire class of users would need it off, so an option switch was provided.
What do you think is more probable, that he intentionally left that hole open despite not directly benefitting from it, or simply that when he answered both original reports and provided an appropriate solution, he didn't fully think about the details and implications of what those issues meant?
Have you really never had a closed issue come back to you with "went from bad to worse, abort!"?
Thanks for the amazing job you are doing on iTerm; it's a joy to use.
sudo tcpdump -i en0 -s 5000 -n port 53
You just made me realize something, though. The Google and AWS examples you gave won't be able to do this, but if you set up wildcard DNS and delegate a zone to your own nameserver, you could make your software do a lookup for, e.g., something like "bm9ib2R5IHdpbGwgZXZlciBub3RpY2UgaWYgSSB0cmFuc21pdCBkYXRhIGxpa2UgdGhpcyEKCg.example.com" and exfiltrate data via the DNS request in the process. The server could then return 127.0.53.53 to mean "ACK; data received OK", whereas NXDOMAIN or any other error would mean to try again.
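A minimal sketch of the encoding side of that trick, assuming a hypothetical `tunnel.example.com` zone you control. Base32 is used rather than base64 because DNS names are case-insensitive, and each label is capped at the 63-octet limit DNS imposes:

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 octets

def encode_for_dns(data: bytes, domain: str) -> str:
    """Pack arbitrary bytes into the labels of a DNS query name."""
    # base32 survives case-folding resolvers; strip '=' padding, which
    # isn't a legal hostname character
    payload = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [payload[i:i + MAX_LABEL] for i in range(0, len(payload), MAX_LABEL)]
    return ".".join(labels) + "." + domain

def decode_from_dns(qname: str, domain: str) -> bytes:
    """What the authoritative server would do with the incoming query name."""
    payload = qname[: -(len(domain) + 1)].replace(".", "").upper()
    payload += "=" * (-len(payload) % 8)  # restore base32 padding
    return base64.b32decode(payload)

name = encode_for_dns(b"secret data", "tunnel.example.com")
# The client resolves `name`; the authoritative nameserver for
# tunnel.example.com sees the query and recovers the payload:
assert decode_from_dns(name, "tunnel.example.com") == b"secret data"
```

The real work is on the server side: the authoritative nameserver for the zone reassembles the payload from each query it receives and answers with the agreed ACK address. This is essentially what iodine automates, along with sequencing and retransmission.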
Hmmmmm. Wondering if I should delete this...
(I realize this is exactly how the Iodine DNS tunnel works. FWIW, freedns.afraid.org's free options are perfectly capable of getting iodine working, I was very pleased to discover.)
It's pretty cool being able to do an rsync-over-DNS.
For ages I've been meaning to [figure out how to] report this to the iodine dev, but I actually set up iodine specifically to get a working network while I knew I'd briefly be visiting a hospital.
I discovered to my amusement that the (public!) hospital's IT infra is really, really good. I was trying to SSH directly on top of the iodine tunnel; the first few DNS requests associated with the connection setup would work, and I'd get as far as a shell prompt, but everything would screech to a halt almost instantly after that. Maybe I'd get a single character typed, then it would completely die. I figured I was looking at a remarkably well-put-together leaky-bucket implementation.
So I tried hacking usleep()s into likely-looking spots in the code, but that didn't seem to slow it down enough. iodine is a rather interesting program internally, and a quick overview while distractedly sitting in a waiting area wasn't sufficient to figure out why my changes weren't throttling it enough to stay under the limit.
Before this "production" test, I previously verified that iodine was working by running the client on an AWS box. IIRC, ping ran over the link for quite some time (less than an hour; many minutes) without a single hitch.
On another note, I found that iodine seemed utterly incapable of setting up a correctly-configured tunnel on my Arch (receiving/server) box; I always had to ifconfig the tunnel (I forget exactly how) to make it work. The problem was that my ifconfig-ing only routed one specific IP address, while iodine wanted to give each connection its own discrete IP, and old sessions that locked up would take a while to time out. So I made a gigantic hack-script that would kill iodine every 1.5 minutes if it didn't see an authenticated SSH login. Would be nice for everything to just work properly...
It may be better to do it like this:
sudo tcpdump -i any -s 0 -n port 53
EDIT: For those who want to know how to determine that in the future, hit a generic ELB hostname with SSL and you should get the cert mismatch warning back with details.
Don't worry, George, your responsiveness more than makes up for an honest mistake.
To everyone else: I would highly recommend if you make money on a daily basis using iTerm2 that you support his efforts:
Before you ask: yes, I'm a Patreon supporter of George's; his work inspires me.
But it means that every string that looks like a domain is transmitted unencrypted over the internet.
If your software deals with private user data, you must consider the side effects of every API you are calling. You can’t just transmit data somewhere and hope that everyone will do the right thing. Network traffic is monitored on lots of networks. Unless the data is encrypted and you have reason to trust the receiver, don’t send it.
And whatever you do, don’t send private data without user action. Users expect web requests to be made when clicking a link. They do not expect data to be transmitted when hovering the mouse over URLs.
I completely agree that one must take into account how that feature fits with your product, user expectations, and privacy, but none of that means a feature that works as intended is a security flaw.
As others have said, prefetching pages has been done for years. Checking to see if it's a valid domain isn't an unreasonable feature for iTerm's URL highlighting, and for those who would prefer it didn't, he's changed that behavior. But let's not pretend that it was some absurd use of data that no one could have reasonably predicted.
And lastly, let's also stop clutching our pearls about "transmitting data". It did a DNS lookup, and while technically that means there was a transmission, it wasn't taking what people traditionally call user data and transmitting it to a foreign third party. It's perfectly reasonable to check a domain name against a DNS server. Maybe unwanted, yes, but not even remotely irrational or irresponsible.
If you leak user data accidentally, saying “that wasn’t my intent” doesn’t help much.
The important thing that you don’t understand is that there is a difference between a search field / url box, and a Terminal.
I absolutely expect my browser to make DNS queries for stuff I paste into the URL box.
I don’t expect my terminal emulator to make DNS queries for random strings displayed on screen that happen to match a regex.
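To illustrate the point with a made-up pattern (not iTerm2's actual one), even a conservative "looks like a hostname" regex will happily match passwords and filenames that happen to contain a dot:

```python
import re

# A naive "looks like a hostname" pattern, purely illustrative --
# anything of the form word.word matches.
DOMAIN_RE = re.compile(r"^[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

candidates = [
    "example.com",   # a real domain
    "hunter2.Xk9",   # a password that happens to contain a dot
    "config.yaml",   # a filename
]

for s in candidates:
    if DOMAIN_RE.match(s):
        print(f"would trigger a DNS lookup: {s}")
```

All three strings match, which is exactly how a password or filename sitting on screen can end up inside a DNS query to your resolver.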
> What happened: iTerm sent various things (including passwords) in plain text to my ISP's DNS server
iTerm was accidentally transmitting passwords in plain text via the network.
I'm pretty sure transmitting passwords in plain text isn't "working as intended".
Sure, you can go blame the user for not knowing that iTerm makes DNS queries when you hold down the command key.
But if you want to make secure software, you can't just tell the user it's their fault. You need to make sure that accidentally disclosing private information is not something that easily happens.
I cringe just thinking about implementing something like that.
n.b. for me, in iTerm2, I have to press cmd to make the links highlight before I can click on them - so there's little danger of accidentally opening something you didn't want to.
Doing any DNS before clicking on the links seems like overkill, though. My browser can tell me if the URL is garbage or not.
~ dig somerandomurlimadeup.com | grep 'Query time: '
;; Query time: 23 msec
I'm not saying that this is the case universally, but for many the typical lookup time will be very fast.
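For anyone who wants to measure it themselves without `dig`, here's a rough sketch using only the system stub resolver; note that a repeated lookup is usually answered from the resolver's cache, so only the first call reflects the full round trip:

```python
import socket
import time

def lookup_ms(host: str) -> float:
    """Time a single stub-resolver lookup; a failed lookup still took time."""
    start = time.monotonic()
    try:
        socket.getaddrinfo(host, None)
    except socket.gaierror:
        pass  # NXDOMAIN or no network: we still measure the elapsed time
    return (time.monotonic() - start) * 1000.0

print(f"{lookup_ms('somerandomurlimadeup.com'):.1f} ms")
```

The numbers vary wildly with the network: single-digit milliseconds against a caching resolver on the LAN, but potentially hundreds of milliseconds on a slow or captive link, which is why some people noticed the highlighting lag at all.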
A WA crash was also presented.
Well that was a very interesting thread...
it does come with a time cost, however: little popups every time you launch a new application or connect to a new service, which you have to evaluate and handle since you haven't set rules for them yet.
but i do distinctly remember getting alerts for this and being annoyed enough to go figure out how to disable the feature in iterm2 (it was at least a couple years ago, so my memory is hazy).
maybe someone with more recent experience can clear up the mystery for us. =)
~4 hours from report to release.
More like 2 years! And at least 3 separate bug reports, roughly one year apart each!
Unless you are into alternative facts, of course, then, yeah, very prompt release engineering and vulnerability fixing!