Gone in Six Characters: Short URLs Considered Harmful for Cloud Services (freedom-to-tinker.com)
495 points by ajdlinux on Apr 15, 2016 | 165 comments



Also notice the different attitudes of affected services:

- OneDrive "[...] reiterated that the issues we discovered do not qualify as a security vulnerability"

- Google Maps "[...] responded immediately. All newly generated goo.gl/maps URLs have 11- or 12-character tokens, and Google deployed defenses to limit the scanning of the existing URLs."

Well done, Microsoft!


Microsoft's Security Response Center (MSRC, as mentioned in the article) is terrible. I reported a security issue in Outlook.com and Outlook for Office 365 back in July, and every month I get an email from the case manager telling me that "the team is still working on this". Mind you, I reported a similar bug three years ago and it got fixed in "only" two weeks. In both cases it's a trivial fix with mitigations already in place. I'm not even mentioning how hard it is to talk to someone about a bounty.

I'm considering migrating away from Microsoft products because of this; they offer bounties for important bugs but the way they handle reports is horrible.


I worked with MSRC several times over the years and found them to be smart security professionals (and often former hackers) who care deeply about improving security.

I suggest you take the long view and compare how Microsoft handled security disclosures in the past ("That vulnerability is entirely theoretical.") with how they handle them today (inviting hackers to their on-campus BlueHat conference, sponsoring CanSecWest, etc.). Things could always get better, but they've come a long way.

More specifically, "only" 2 weeks to issue a security fix is actually pretty good for thick-client/desktop software. It's less than ideal for something like a web app, where they control all the machines that need to adopt the fix, but still. Also, the severity of the reported issue is a factor in when something gets fixed.

Consider looking at something like rfp's RFPolicy if you'd like guidance on how to disclose in a reasonable, timely way.


I completely agree with you; in general they have come a long way (bounties, culture, recognition...). They're just not there yet (by "there" I mean Facebook-grade responsiveness).

I was only voicing my personal experience, which has been very poor (maybe it's the team, or the seriousness of the bugs), but in general I have heard some good things, especially for truly critical issues.


>Facebook-grade responsiveness

When did Facebook become the pinnacle of security response? The last thing I read about them was pretty horrible. Much worse than the Microsoft response here.

http://exfiltrated.com/research-Instagram-RCE.php


I remember this story; people familiar with the industry (tptacek and friends) explained quite well why Facebook did the correct thing, even if it seemed unfair to readers unfamiliar with the industry: https://news.ycombinator.com/item?id=10754194


"only" 2 weeks to issue a security fix is actually pretty good for thick client/desktop software.

That's actually even more of a reason to migrate away from Microsoft.


I think you misread his comment — the issue was reported _years_ ago and then fixed just two weeks back.


You misread it! I reported one a long time ago, which was fixed in two weeks (so relatively quickly). Another, very similar, reported nearly a year ago, still isn't fixed.


Microsoft is huge, and the teams behave differently. I have reported two different problems I had with Visual Studio 2015, using the included feedback mechanism, and received back email responses in a couple of hours. In both cases I was able to solve the problem I had and they actually followed up a couple of days later to make sure the problem was definitely fixed.

True, none of those were security issues.


Except Microsoft is kinda right: it is not a security vulnerability. If somebody can access a given URL just by guessing it — well, it is unprotected, so implicitly it is supposed to be accessible by anyone. No matter how long it is. I could have found it via Google; you might have exposed it by sending it via email/Skype/whatever. For what it's worth, I can bruteforce 12-character URLs just as easily: if you don't know where you want to go, it doesn't matter which road you take, so as long as I get at least some valid URLs by random guessing, it is virtually as insecure as if every URL were valid.

Praising Google and blaming Microsoft over this is actually stupid and harmful — this "concerned about security" image is nothing but a marketing move for naïve people. If shortened URLs are twice as long as before, does that mean it's now safe to have confidential information assigned to them and left without authentication? I bet it doesn't. But it sort of implies so. The URL shortening service just got worse, by being half as efficient at what it's supposed to do: shorten URLs. But not more safe, not at all.


> If somebody can access a given URL just by guessing it — well, it is unprotected, so implicitly it is supposed to be accessible by anyone. No matter how long it is.

Let's see what happens when we apply this logic to cryptography...

If somebody can access your data just by guessing its key — well, it is unprotected, so implicitly it is supposed to be accessed by anyone. No matter how long the key is.


Except you don't expose passwords to third parties (and bit.ly, your email provider and the rest are all third parties), and if you do — it's your fault you got fucked. And you can choose an arbitrary-length key or password, rather than having a 6- or 12-character one randomly generated for you. And, uh, I almost forgot: passwords are supposed to protect some information, and URLs are supposed to show somebody the road to where that information is stored. It's pretty much by definition, if you unwrap the abbreviation. That's why you have logins and passwords in the first place, not just longer logins: the first is supposed to be public (to be shared if need be) and is used to access some resource, and the second is supposed to be kept private and is used to authenticate that access. And URL shorteners are supposed to make URLs short and nothing more.

And in fact your comment, and the fact that you think these things are comparable, is exactly what is dangerous about the situation — not the damn short URLs themselves. It's like forcing drillmakers to start making drills out of rubber because some idiot tried to pick his teeth with a turned-on drill and got killed. See how dangerous drills are! Well, yeah, they are pretty dangerous, but how about just not picking your teeth with the fucking drill, and not applying "toothpicking logic" to drillmaking?

By the way, the guy with the rubber drill can still kill himself with it — it's just less efficient both for drilling wood and for killing idiots. Exactly like "longer shortened URLs".


You left out this: "As of March of 2016, the URL shortening option is no longer available in the OneDrive interface, and the account traversal methodology described above no longer works." Both companies still have not solved the existing link problem. Edit: As someone said below, Google still allows the short method with more entropy which is still insecure. MS does not allow shortening.


What is insecure about more entropy + blocking scanners? By that rationale, any internet-connected service with a password is insecure.


Except that, according to Microsoft, this change had absolutely nothing to do with the report, which they still consider to be not-a-vulnerability.


Maybe I'm missing something, but if the URL shortening feature doesn't exist anymore, how can it be a vulnerability?

Edit: The issue would be with existing links, but it's difficult to change existing links for obvious reasons. Both MS And Google at this point have calculated that deleting existing vulnerable links would be worse than the security issues presented here.


To give them the benefit of the doubt, what they could have meant was that it was an onerous feature that swapped privacy for convenience. Besides, scanning of URLs should be prevented by bit.ly in this case, no? What's MSFT to do?


The fact that anyone considers a shortened URL as a means to secure a piece of data outside of authentication saddens me. I think that should be the key point here. Did you just get a URL that gives you a bit of data without the need to authenticate? Then it's not secure.


I don't think people consider URL shortening a security measure, more of a convenience measure. Also, you should bear in mind that many users of such services are not exactly technically savvy (especially compared to the HN crowd, which often gets forgotten here).

Given that, if a company is providing URL shortening for their product (such as OneDrive), it should be their responsibility to protect the users.


How is a URL that's not linked anywhere different from a password?


A secret kept in a URL is less likely to be treated as confidential, both by people and machines. For example, compared to a cookie, they're more likely to be shared or logged.


It is not. The problem is that short URLs are often only six characters long, so they are just as safe as a six-character password (i.e. not very safe).


Far less safe, because you use a password in combination with a user ID, so to crack someone's account you must have both their login name and password. Six characters alone are just six characters — easy.


One difference is a GET is likely to have less/no rate limiting vs. a login POST, making it faster to brute force.


Are you confident that it doesn't appear anywhere else?

There are two main concerns: the first being that you use a URL differently — and think of it differently — than a password; the second, that the URL persists in various forms.

Accessing a resource on the Web might be logged in numerous places — e.g. your web history, internal network logs, or a MITM if not over HTTPS — and might be sent as metadata, like a Referer header to another website. Imagine if all of your passwords were logged by your client any time you entered them.


I don't mind downvotes (and this appears to only have been downvoted once), but I do want to express legitimate concern over that: what I said is a standard security concern, and precisely why you don't put sensitive information in GET requests. I hope that others aren't dismissing this issue.


It's not just about short URLs. Several services, such as Flickr (and IIRC Dropbox), allow you to generate links which allow the recipient of the link to do something (view an album, download a file) without authenticating. This is of course not at all secure, however it is very convenient for sharing stuff with people who don't have an account with the site.


Note that '11- or 12-character tokens' really aren't secure either: assuming mixed-case and digits, that's only 65- to 71-bit security; if it's case-insensitive, that reduces to 56- to 62-bit security. That's still pretty easily enumerable.

For mixed-case and digits, one requires at least 22 truly-random characters to provide 128 bits of security (43 for 256-bit security).
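As a sanity check on those figures, here's a small sketch (plain Python, assuming each character is drawn uniformly at random from a 62-symbol mixed-case alphanumeric alphabet) of how many characters a target security level requires:

```python
import math

def chars_needed(target_bits: int, alphabet_size: int = 62) -> int:
    # Each truly-random character contributes log2(alphabet_size) bits,
    # so we need ceil(target_bits / bits_per_char) characters in total.
    bits_per_char = math.log2(alphabet_size)
    return math.ceil(target_bits / bits_per_char)

print(chars_needed(128))  # 22 characters for 128-bit security
print(chars_needed(256))  # 43 characters for 256-bit security
```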


We're talking number of URLs, not number of CPU iterations. Exploring 2^56 URLs is not that easy.


> Exploring 2^56 URLs is not that easy.

Not easy, but not impossible. And it's not the case that an attacker is looking for one interesting URL: he's looking for any interesting URLs. Depending on the number of URLs stored in the service, and the fraction which are at all interesting, it may very well be worth the attacker's bother.

As an aside, why the heck is my post so heavily downvoted? It's factual: the given lengths are not long enough to be secure from brute-forcing; they probably will be brute-forced.


The article states that Google has deployed defenses against brute forcing, so attempting to scan slowly, at a rate that doesn't trip these defenses, may not be worthwhile.

I must admit that I don't understand how the number of characters corresponds to bits of entropy. Know of a resource that explains this?


Napkin math incoming... Say you only used a-z in your token: at 1 character long you have 26^1 combinations; at 6 you have 26^6, or 308,915,776 combinations, which could easily be scanned. Increase the length to 12 and you get 26^12, or 95,428,956,661,682,176 — a big-ass number. If we reserve 1,000,000,000,000 of those for actual items and create them over this range, then the odds of guessing a correct token are about 0.000010479. Then ban all the hosts which trip more than the average number of 404s.


> I must admit that I don't understand how the number of characters corresponds to bits of entropy. Know of a resource that explains this?

There are 26 letters; mixed-case doubles that for 52 choices; digits add 10 for a total of 62 possible choices for each character. That means that 2 characters have 62^2 possible configurations, 3 62^3 and so forth. If you take the resulting number and calculate the ceiling of its logarithm in base 2, you get the number of bits needed to represent it:

    (ceiling (log (expt 62 11) 2)) → 66 -0.5038376


    log(num_options)/log(2)
So for an 8-character digits-only value, num_options is 10^8, so log(power(10, 8))/log(2) = 30 bits. This means both 30 bits needed to store the value and 30 bits of security. It also works for octal or hexadecimal: just replace log(2) with log(n), like log(16) for hexadecimal.


You have your arguments swapped. 30 bits for 8^10, 26.6 for 10^8.

https://www.google.co.uk/search?q=log2(10%5E8)
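The arithmetic in question can be sketched in a few lines of plain Python (standard library only), which also makes the swap explicit:

```python
import math

def bits_of_entropy(alphabet_size: int, length: int) -> float:
    # A token of `length` independent, uniformly-random characters drawn
    # from `alphabet_size` symbols has log2(alphabet_size ** length) bits,
    # i.e. length * log2(alphabet_size).
    return length * math.log2(alphabet_size)

print(round(bits_of_entropy(10, 8), 1))   # 26.6 — 8 random digits (10^8)
print(round(bits_of_entropy(8, 10), 1))   # 30.0 — the swapped version (8^10)
print(round(bits_of_entropy(62, 11), 1))  # 65.5 — 11 mixed-case alphanumerics
```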


First - assume the URLs use something like base64 encoding. That means 11 chars are 2^66, not 2^56 URLs.

In turn, that means the entire space is ~10^20 URLs (a smidgen less: about 7 × 10^19).

Let's assume 1B users, with 1K URLs belonging to them on average. That's 10^12 URLs. Which means, on average, you query 10^8 URLs until you hit the first one.

Let's further assume you could actually query 10^5 URLs/s. That means finding a single URL requires sustaining that query rate for about 20 minutes.

Sure, theoretically that's doable. Except you'd cause query rate and error rate to spike, and the setup to do so would be quite expensive.

So, in the best case, after those 20 minutes, you have a random picture of a dog, or a map.

Having a network capable of running 100K qps and risking detection to find, maybe, one picture every 20 minutes? There's just not the incentive to do that. There are many more interesting avenues for this.

And since it's bandwidth-bound, not CPU bound, that speed is not going to rapidly accelerate.
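The estimate above can be reproduced with a few lines of arithmetic. All the figures (1B users, 1K URLs each, 10^5 queries/s) are the parent comment's assumptions, not measurements:

```python
# Back-of-envelope cost of scanning an 11-character base64-style token space.
space = 64 ** 11                      # ~2^66 possible tokens
live_urls = 10**9 * 10**3             # assumed: 1B users with 1K URLs each
guesses_per_hit = space // live_urls  # expected random guesses per valid URL
rate = 10**5                          # assumed sustained queries per second
minutes = guesses_per_hit / rate / 60

print(f"{guesses_per_hit:.1e}")  # 7.4e+07 guesses per valid URL
print(round(minutes, 1))         # 12.3 minutes per hit at 100K qps
```

Rounding the per-hit guess count up to 10^8, as the parent does, gives the quoted ~20 minutes; either way, the conclusion about cost versus payoff is the same.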


Exploring 2^56 URLs is possible.

Exploring it with anti-brute force detection from the keeper of said URLs? Good luck. They could easily limit any ip to 10 URLs per hour or something and make it impossible to scan.


Really? If you could make 1,000,000 requests per second, exploring 2^56 URLs would take you 2000 years.


> They could easily limit any ip to 10 URLs per hour or something and make it impossible to scan.

Then an attacker would just use a botnet. Granted, he can probably get interesting items off of the computers in his botnet too.


OneDrive has removed the short URL option. Verified in my account.


it says that in the article.

it also says existing urls are still vulnerable


> and Google deployed defenses to limit the scanning of the existing URLs

I find it amazing that they didn't do this in advance and needed to have this happen to know to fix it.

I remember way back in 1996 or 1997 being able to do URL scans of UPS shipping tracking numbers. It was trivial to bring up the ship-to address for any given shipper (for all of their customers) as long as you had at least one of the shipper's tracking numbers (then just alter it to suit). [1]

[1] So in other words, if I received a UPS package with a tracking number for ABC company, I could easily see everyone else that they were shipping to, including names and addresses.


There was a similar vulnerability with Delta boarding passes... in 2014.

http://gizmodo.com/hacker-says-url-trick-grants-access-other...


Microsoft famously used the term "non-event" for several of their hiccups. Gotta give them credit for their rhetorical skills :)


This is breathtaking. This article is not only very important for the vulnerability it uncovers, but also it is one of very few articles that shows with very specific examples why breaches of privacy do matter.

Even if you 'don't have anything to hide', you don't want anyone to know that you sent someone the directions to a planned parenthood center. Not because you think it's a bad thing to do, but because the publicity of this information could be harmful to you.


I mean, doesn't that count as 'having something to hide'? I think we're in agreement though. As has been rehashed on HN many times, it's a bogus argument.


That's probably quite technically correct, although I think the wording "hide" implies you've done something wrong.

"If you've got nothing to protect..." definitely has a different feeling to it.


Absolutely. That's a much fairer phrasing of the 'nothing to hide' argument.

Next time someone presents the 'if you have nothing to hide argument', tell them to send you a video of each of their family members on the toilet.


Right! The ole toilet argument seems to win over everybody. I was in the Army for a long time, and right after 9/11 during our pre-mobilization for our first deployment, we mobilized out of Fort Stewart, in Georgia. We were put up in these awful billets from WW2. Just awful. Anyways, the bathrooms only had rows of toilets immediately behind rows of sinks. So if you went to shit in the morning (literally knee-to-knee with the guy shitting next to you), someone's ass was in your face as they brushed their teeth. It was at once one of the most enlightening moments of my life about the cultural concept of privacy that we have these days. We have an expectation of privacy that obviously they did not enjoy in days of yore. And we have got to fight to keep it.


Yes, instead of trying to defend privacy as "we don't have anything to hide, we just don't want it to be public", how about we admit that we DO have stuff to hide - all the time, and every single one of us. If the government wants to discover something then get an individual warrant upon probable cause, as the Constitution requires.

We're not supposed to let it know everything we do or say, while it is increasingly more secretive with its own actions. It should be the other way around.

But most of us have bought into the government's propaganda that "if you have something to hide, then you must be a criminal", which isn't that different from the argument that "if you oppose the war in Iraq (or elsewhere) then you're not a patriot, or you're a terrorist sympathizer".

Stop buying the U.S. government's bullshit propaganda. Start with rejecting it by default, and then maybe consider whether it's right or wrong, not by accepting it by default as true. We've been burned too many times trusting the government's lies. We should've learned our lesson by now.


The whole "having nothing to hide" argument is deflated by "then you have no reason to look".


doesn't that count as 'having something to hide'

It's the 'none of your business' facet of privacy.


Exactly. It's because privacy isn't criminality, it's vulnerability.

Even if people do manage to live 100% criminal free (which may not be possible, but even if it is), sharing information like addresses, account numbers, DNA, fingerprints, etc. makes one vulnerable.


Sharing a link with someone via email or chat - a private channel - suddenly becomes a share in a public channel because of the lack of entropy in the shortened link.

Even more surprising is the number of people on here who don't understand why this is problematic, essentially blaming the victim for not understanding that their private channel is leaking information. It certainly is not obvious to the general public, and wasn't obvious to the people who implemented these services, that a side-effect of a shortener with insufficient entropy is leaking information from private channels.


Yup. Even if Google and MS disabled their built-in shorteners or made them not-actually-short, if I send a URL over a Twitter DM, it will get shortened... and I doubt that twitter's url-shortening forwarder will restrict access to the url to sender and recipient.

Fundamentally, long-urls-as-security makes mathematical sense but doesn't actually work with the way the web behaves. URLs are shared constantly, freely.

If we had everybody using the same auth systems so we could do whitelists without "oh you need a Google account" or "oh you need an MS account", then whitelists would be a solution, but In Real Life whitelists get in the way, so instead people just rip out the security altogether and count on the security of the URL.

Imho, public-with-shared-password would be the right feature to add, even though it's anachronistic.


Is it reasonable to expect driving directions to be password protected? That example is used in the article and I think it's an excellent test case for thinking about alternative approaches.

Driving directions by themselves are usually pretty innocuous. When one end is something like an abortion clinic, it gets a bit more sensitive; when the other end is a specific house, even more so. And I don't think you can expect driving directions to ever get password protected.

Some links are like words - merely a reference; some are like sentences - describing some relation or fact; and some are like passwords - describing how to find or access something that you wouldn't otherwise know.

The first are not revealing when out of context; the last are always dubious. It's the middle case, describing a relation (like driving directions) that's most problematic.


It would be reasonable to offer the option when you hit the "share" button. I would assume people who care about privacy but are sharing with non-Google people would like the ability to add a shared password to the map.


I don't understand the use of URL shorteners for 99% of what people use them for.

Unless you're sending your link over a relatively short fixed-length limited medium like Twitter or SMS, there is no fucking point.

I've seen people post links to download apps, which go from their own site > some random bit.ly/etc URL > dropbox. I'm already seriously doubting if I want to run your app if you can't manage something better than dropbox file sharing, but to then rely on a bit.ly URL that could go fucking anywhere, when you're just putting the link on a webpage is beyond belief.


1) Let's say I am doing a presentation, and the computer has an internet connection but can't use USB drives, etc. So I shorten the URL (to, e.g., a Google Drive file) and write down the token. Much better than logging into a personal account on a shared machine.

2) Analytics. I've often seen different shortened URLs used in different locations for the same campaign. I assume that makes it much easier to track your offline marketing channels.


Analytics Analytics Analytics. Shortened URLs automatically come with very decent analytics, making them WAY easier than setting up UTM params or other campaign identifiers.

Add a "+" to any bitly or goo.gl link and see what I mean. https://goo.gl/forms/9fA366pQ1f+

I use goo.gl to see how many people click a link I share on facebook too.


Wow that seems like information that someone might occasionally want to keep private.


Yeah I agree. Anytime I see a bitly or goo.gl I check the stats just for fun. I've seen some other shorteners that track other stuff, one even had ip addresses!


I'll use the shortened Google Maps link over the full URL param mess every single time. Why wouldn't I, article aside?

And bit.ly gives you analytics. That's their service.


Where are you putting that link that you can't just use the full link? Many places let you use <a> tags which lets it look not so ugly.


1. If I'm looking at a complicated webpage with dynamic content, it's quite often the case that simply copying and pasting the URL will not bring up the exact same state I'm viewing. Indeed, when I go somewhere in Google Maps, the URL doesn't change even as I scroll and zoom around the map. But when I click the button to generate a shortened Google Maps URL, I know I'm getting a link to the view as it appears right now.

2. Long urls still sometimes break when handled by some email clients.


I constantly see short URLs used for, e.g., Facebook links, where the URL isn't displayed anyway, but rather a preview. I abhor them, because they add a lot of time to first displayed content on mobile -- on the order of 5 seconds.

I also strongly suspect that a lot of them are being used more for analytics than URL shortening, which also annoys me to no end.


They're good for Rick rolls.


There are services other than bit.ly (like shorte.st and adf.ly) that stick interstitials between clicking the link and reaching the destination. So revenue. That's one reason.


How is that a benefit for the person using the link shortener service?


Say I have a Facebook page with 100k followers. I can then say "check out this awesome page about ...." and put the link through one of those services. I'll then get $x per 1,000 clicks. Of course, my Facebook page readers will have to sit through a second or two of adverts, but I'll get $$.


With adfly you (as the person creating the short link) get a cut of the revenue.


if you need to type a URL manually, that could be another reason


The ability to traverse the full content of a OneDrive account starting with a short URL and in some cases /upload malware to them/ which gets synced back to the user's computers is shocking. Even more shocking is that Microsoft apparently declared this to be as designed, not a security bug. That's some terrible software design.


As a developer who helped out with QA, one of the most annoying things was reporting bugs and then getting into a debate about whether or not it was a bug, because it was "working as designed" even though the functionality was clearly terrible/broken.


It's like pushing rope. Then to really get the party started, throw in some "business analysts" and "scrum masters".


I thought the same, I thought it was just going to redirect to a standard login page.


If your security depends on someone not walking your DNS zone, you're doing something wrong.

If your security depends on someone not guessing a URL, shortened or not, you're doing something wrong.


> If your security depends on someone not guessing a URL, shortened or not, you're doing something wrong.

Why? Done properly, a secure URL should be as long and contain as much entropy as a strong password.

Are all systems which depend on someone not guessing your strong password wrong? Are practically all encryption schemes wrong?


URLs leak in a bunch of ways. Cached in browsers and proxies, sent in referers, sent in toolbar requests, logged in access.log files.

I'm not saying that it's necessarily wrong to use a long URL as security; sometimes the problem constraint means that is the only way to do it. I've done it on rare occasion. But I also made sure that management knew there was a risk here.


If that URL is being sent over an HTTPS link (which of course it should be) then the only two points that can log/cache it are the browser at the local end, and the server at the remote end.

The latter is up to server design, the former is an interesting point.

Browsers are unlikely to cache GET parameters, but these things might end up in history etc.


The "Referer" header can also leak URLs.


> Are all systems which depend on someone not guessing your strong password wrong? Are practically all encryption schemes wrong?

No, but if your security system has a password and only one account shared by everyone where the password is shared as plain text (typically just emailed from person to person) without any sort of user access logging then it's got major problems.

With just a URL you have no idea who has access and the only way of revoking access to anyone is to permanently move the resource and then get back in contact with everyone who should have access by sending the new "password" around.

> Are practically all encryption schemes wrong?

An encryption scheme that involves the users sharing the only password through far less secure channels?


He's saying it shouldn't be the only layer of security.


How many layers of security should there be when you give someone driving directions to the abortion clinic?


Because access to that resource can be replicated by simply viewing network traffic logs. Sending authorization info in the headers over SSL would be much safer.


First of all, any secure resource should only be accessed over SSL, so I assume that.

The path is exactly as secure as authorization headers. Network logs will not show the path of SSL requests (it's encrypted).


URL based Bitcoin wallets proved this not to be the case. URLs get picked up by Omnibar, Skype, etc. they find their way into search results... I wouldn't even trust secret material in the fragment-id even though that in theory is safer.


Doesn't excluding them in robots.txt solve the search engine problem?


Following the rules laid out in a robots.txt file is optional. The reputable search engines tend to play by the rules but dodgy ones? Not so sure.


The dodgy ones don't have access to the URLs. The example was Skype links: if Microsoft scrapes those, they'll follow the rules and not make them available in searches.

If you give your link to a dodgy search engine, you've lost.


Honestly, there's something to be said for "shared passwords" for cases like this. I know that mathematically a public password adds no actual security over the long URL, but they're unlikely to be embedded in the link and propagated automatically as easily. But google only offers sharing write-access by account or publicly, not "public with password".

I should be able to just say "here's the link, the password is burp441toaster, no spaces all lowercase".


Password reset links are the widespread rebuttal to your statement.


they are one time use though


I would not be so hard on the matter, but yes, making private files available by url only seems like a bad security habit. Still, sometimes one wants to avoid Registration Wall to enhance usability (doodle.com is a nice example).

This is to say that the whole url-shortening business should still be improved, even though you're probably not going to use it for your private stuff where you want access control.


Yes! Just using longer URLs is just security through obscurity, which just gives a "false sense" of security.


That's not what security through obscurity means. It's very possible for "unique urls as a password" to obey Kerckhoffs' second principle.

Craigslist does this and it's a great system for a Craigslist post. You can sign up for an account if you want but you can also post without an account and you get a unique url as a password to edit or delete your post. Craigslist posts are only good for 30 days and someone deleting your Craigslist post isn't the end of the world.

It's very possible to use urls as a password securely, password reset emails do it all the time.
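For what it's worth, minting a token with enough entropy to serve as a capability URL is a one-liner in most languages. A sketch in Python (the reset-link path is made up for illustration):

```python
import secrets

# 16 random bytes = 128 bits, base64url-encoded into 22 URL-safe
# characters; the same order of strength as a good random password.
token = secrets.token_urlsafe(16)
url = f"https://example.com/reset/{token}"  # hypothetical reset link

print(len(token))  # 22
```

The point is that "URL as password" is fine exactly when the token is generated this way, rather than being a short sequential or 6-character identifier.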


Password reset tokens through email have always troubled me. Unless the email content is encrypted (and it almost never is), that token is exposed (imagine a compromised email server that harvests these tokens). Usually these things are time-limited, and you can tell if another entity reset your password, but it may be too late at that point.


In this case, Facebook is "doing something wrong" with every photo URL.


Scary, but it's really more like "using a trivially computable string as a trampoline for live authorisation tokens considered harmful for supposedly secured cloud services".

Not as catchy, I admit.


I think short URLs are great for Twitter, for sending people to public URLs, and for other services where you're literally just shortening a public URL (even when they include a tracking redirect to harvest all your juicy habit information), but I don't think they're a great thing for private URLs.


If the true URL and shortened "URL" aren't managed by the same organization, we've added another point of flakiness and link rot to a system that already has too many.


Twitter counts every link, regardless of length, as 23 characters, so they're not even that useful there.


I think that's precisely because they are always shortened automatically on twitter's side.


A long time ago, perhaps 6-7 years, I used an OS X app that took screenshots and put a link on the clipboard. I noticed that it used very short paths and they appeared to be sequential, so in a moment of boredom I made a little PHP script that just gave you next/previous buttons to iterate through them. It was amusing and I considered actually scraping them and then trying to OCR for sensitive information or something out of curiosity, but I never got around to it (pity, I could have scooped this article!).

Well, fast forward about two years, and that script is still sitting around on a forlorn webserver of mine. Somehow, I have no idea why, some random person ended up tweeting a link to it and it spread around a bit until the software vendor got wind of it. They ended up sending me a probably too-polite email asking if I could do something about it, and after a bit of back-and-forth I got instructions from them on how to enable more secure "long URLs" in the software (an option that I think was new since I made it, so I wonder if I may have actually inspired it...) and added those to the bottom of the page.

It's long gone now, and to be honest I can't remember which app was affected. Possibly tinygrab.

The point of this anecdote is that the problem is not at all new, and the problem of how to deal with it isn't new either. I suggested to the developer at the time that they should probably use long URLs by default, but it seems users just like those short URLs too much. Going to non-sequential assignment would have helped, but the space was still just too small.

Really, I think the fix is just communication. Microsoft's workflow used to be sensible in that OneDrive gave you a long URL and then you had to click another button to get a short one. That second click should come with a warning that there should be no sensitive information in the document, and that it could potentially become public after shortening. Users will have to be trusted with the judgment, but at least you've CYA'd.


Speaking of short urls, am I the only one that refuses to click on them? I hate that I can't see where the link is going to take me.


I sometimes "curl -I" first to check what it redirects to before clicking. Would love a mobile app that registered as the default app for various URL shorteners and did the same.
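For anyone who wants to script that check, here's a minimal sketch of what "curl -I" does under the hood: issue a HEAD request and read the Location header without following the redirect (the short URL below is a placeholder, not a real link).

```python
import http.client
from urllib.parse import urlparse

def peek_redirect(short_url):
    """Return (status, Location header) for a short URL without following
    the redirect -- the same information `curl -I` prints."""
    u = urlparse(short_url)
    conn_cls = http.client.HTTPSConnection if u.scheme == "https" else http.client.HTTPConnection
    conn = conn_cls(u.netloc)
    # HEAD asks for headers only; the target is revealed without visiting it.
    conn.request("HEAD", u.path or "/")
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

# e.g. peek_redirect("https://bit.ly/SOMETOKEN") -> (301, "https://...")
```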


Nice! I didn't know you could do that.


I've never heard this before. What would be an example of somewhere you don't want to go? I've never clicked a shortened URL and had unexpected results.


Most of the time it happens on Twitter. Someone will tweet a bitly link with some enticing headline but I refuse to click it because I don't know where the link is taking me. Perhaps I'm just overly cautious...


I'm surprised that no one responded to this dubious claim with a link to e.g. goatse...


I pointed this out when I was at Twitter, when they were still using the normal short t.co URLs inside DMs. We quickly switched to using very long tokens for those generated URLs. To me it seems completely obvious, and I struggle to understand the developer who stored private information in something so eminently scannable.


When you use a URL shortener, you are effectively encrypting the URL and telling someone they have to go to some third party to get the plaintext, without any checksum to verify that the third party didn't send back an incorrect URL, either maliciously or by accident.
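One hypothetical way to add that missing checksum: append a truncated hash of the target URL as a fragment on the short link (fragments are never sent to the server, so the shortener can't rewrite them), and verify it after expansion. This is a sketch of an idea, not something any real shortener implements.

```python
import hashlib

def fingerprint(url: str, n: int = 8) -> str:
    # First n hex chars of SHA-256 over the target URL.
    return hashlib.sha256(url.encode()).hexdigest()[:n]

def make_verifiable(short_url: str, target_url: str) -> str:
    # The fragment stays client-side, so the shortener never sees it
    # and can't tamper with it server-side.
    return f"{short_url}#{fingerprint(target_url)}"

def verify(expanded_url: str, expected_fragment: str) -> bool:
    # After expanding the short link, check the target against the fragment.
    return fingerprint(expanded_url) == expected_fragment
```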


Well... the article is a little disingenuous about the shared-folder stuff. If a user chose to share a folder with anyone who has the link, and also allows public writes into the folder from anyone, then that's by design. Obscurity in the URL isn't necessarily required, and it may even be a feature to allow easy dumping. It's on the end user to make sure they aren't auto-downloading public data that has been dumped there. I can see why this may not be ideal from a security standpoint, and allowing data mining/unauthenticated file drops may not be a great way to handle it, but I don't think the article gives the full details. Unless I'm completely wrong and OneDrive has no sharing-permission options (public, select group, etc.), in which case yes, it's a security vulnerability.


"...For Cloud Services" doesn't seem like an appropriate title to me. I would say it's more like "For Cloud Service Users".


This is an article about how people are using url shorteners for the wrong reasons and/or not using security on private data.


It shouldn't be easy to misuse a shortener that is built into a service like OneDrive.


Until today I had no idea what OneDrive is, so I'll take your word for it. I would suggest that the problem is that access to the OneDrive resource starts and stops at the URL. Conventional security protocols are conventional for a reason.


"After an email exchange that lasted over two months, “Brian” informed us on August 1, 2015, that the ability to share documents via short URLs “appears by design” and “does not currently warrant an MSRC case.”"

What is it with these large companies ignoring serious security issues while paying attention to smaller ones? I reported something to Facebook that was a moderate privacy concern and got a bug bounty. A few months later, I discovered that I could make Facebook falsely report the domain that a posted URL goes to, and they denied that it was even a bug. So I could share a URL on mydomain.com, customize the contents of the share posting ("Obama says he's going to nuke Russia"), and Facebook would show users in the post that the link goes to Whitehouse.gov or CNN.com or any other domain I choose. This still works perfectly.

These companies really need to take a look at the analytical abilities of those they are employing to screen bug reports.


One also needs to take into account how these larger companies' internal groups function.

"Brian" probably looked into it, knowing that obscurity != security, but got a response back from the responsible group that this was the way they intended it to work, and that group's management wasn't going to do anything about it. "Brian" may even have put messages in the right ears up the management chain, hoping to affect the outcome.

The fact that the email exchange lasted 2 months before "Brian" said "Sorry, not a case." probably means that "Brian" was trying to make it happen and had actually done an analysis.

* Note: I'm NOT Brian. I've never worked for Microsoft.


Probably some metric says that if "Bob" gets less than X cases per year he gets a bonus. Problem is "Bob" determines "Frank's" wage and "Frank" is "Brian's" boss. It is probably more complicated than that but that's what it always boils down to.


OneDrive publicly writeable? Why is that even possible?


IMO the author is conflating two separate "issues".

> TL;DR: short URLs produced by bit.ly, goo.gl, and similar services are so short that they can be scanned by brute force.

This is not the issue for OneDrive. Everyone knew this already, right?

For Google Maps, it's definitely more nuanced. I'm glad Google acted swiftly.

> Our scan discovered a large number of Microsoft OneDrive accounts with private documents. Many of these accounts are unlocked and allow anyone to inject malware that will be automatically downloaded to users’ devices.

This is the issue for OneDrive. I'm not a OneDrive user, but if the documents are publicly editable per a setting the user controls, this isn't a "vulnerability" either.


> if the documents are publicly editable per a setting the user controls, this isn't a "vulnerability" either.

These services advertise it as "editable by anybody with the link" not "editable by anybody"

There is an implication that people can't get the link without you giving it to them.


Honestly, this title feels a bit like FUD. Sure, restricting the space of possible URLs decreases the difficulty of brute-forcing them, but if you don't want something publicly accessible, put it behind an auth wall.


Yes, the title does. But let's take a look at the actual content:

OneDrive generates short URLs for documents and folders using [..] the same tokens as bit.ly. [..] In our sample scan of 100,000,000 bit.ly URLs with randomly chosen 6-character tokens [..] 19,524 URLs lead to OneDrive/SkyDrive files and folders, most of them live. [..] From the URL to a single shared document (“seed”), one can construct the root URL [from which] it is easy to automatically discover URLs of other shared files and folders in the account

In other words, the links provided by these shorteners contain authorization information. And:

Around 7% of the OneDrive folders discovered in this fashion allow writing.

And in the case of Google Maps:

goo.gl/maps URLs used 5-character tokens. Our sample random scan of these URLs yielded 23,965,718 live links, of which 10% were for maps with driving directions. These include directions to and from many sensitive locations: clinics for specific diseases (including cancer and mental diseases), addiction treatment centers, abortion providers, correctional and juvenile detention facilities, payday and car-title lenders, gentlemen’s clubs, etc. The endpoints of driving directions often contain enough information (e.g., addresses of single-family residences) to uniquely identify the individuals who requested the directions
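For scale, the token-space arithmetic behind those numbers (assuming case-sensitive alphanumeric tokens, i.e. a 62-character alphabet, and a purely illustrative scan rate):

```python
# Size of the token space for bit.ly-style [a-zA-Z0-9] tokens, and how
# long an exhaustive scan would take at a hypothetical 1,000 guesses/sec.
ALPHABET = 62
RATE = 1_000  # guesses per second (illustrative, not measured)

for length in (5, 6, 11, 12):
    space = ALPHABET ** length
    years = space / RATE / (60 * 60 * 24 * 365)
    print(f"{length}-char tokens: {space:.3g} URLs, ~{years:.3g} years to enumerate")
```

Five- and six-character tokens fall in days to a couple of years even at that modest rate (and a distributed scanner goes far faster), while 11- or 12-character tokens push the scan past a billion years, which is why Google's fix works.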


Thanks. This was a good summary of the problem.

Do Google et al not have any kind of rate limiting which looks at suspicious behaviour like scanning lots of short URLs?


Did you read the article? At the bottom it said that they have both increased the key size to 11 or 12 characters and deployed methods to prevent brute-forcing of these URLs. I think it's safe to assume that one of those methods is rate limiting.
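The article doesn't spell out the mechanism, but server-side rate limiting along these lines is the usual building block. A minimal token-bucket sketch (not Google's actual defense):

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # refill rate, tokens per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should return HTTP 429 or similar
```

A real deployment would key a bucket per client IP (or per /24), so a scanner enumerating tokens burns through its budget after a handful of guesses while ordinary users never notice.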


Thanks. I asked specifically about rate limiting as the article didn't specifically mention it.


In other news, some people use "password" as their password and 123456 as the combination to their briefcase!


The difference is when those tokens are autogenerated for the user and the user may not even know about it, let alone know about the risk.

A better analogy would be when routers ship with a default world-viewable admin UI and admin as its password.


Agreed (to an extent). The real problem with URL shorteners is their lack of longevity. Some routinely delete old URLs, and all of them will almost certainly be gone within the next few decades.


I agree that's a problem, but it's not really relevant to the article.


Also, everyone should be smart enough to realize a public short URL isn't security. There are public one-time-password sites that provide a far more secure way to share a file quickly, IMHO.


Is your grandmother "smart" enough to realize this?

I think your expectations about the "smartness" of the public are not justified. It's not actually about smartness; it's about information theory. Not everybody is as up to date as you are.


This question is a red herring IMO. Are grandmothers sharing secret information via 5 character encoded URL shorteners? Maybe so or maybe the typical person who would use a public URL shortener to share secret documents isn't a clueless grandma.


Is it the age, gender, or reproductive status of grandmothers that make them less intelligent or less 'up to date' than other people?


Like I said in the second sentence of my post, it's not about intelligence, it's about knowledge: specifically, knowledge of information theory. Grandmothers, as a general rule, are less likely to know about Shannon entropy than the population of HN readers and their milieu (hackers, it's in the name of the site).


To me, this is like saying Base64 encoding is dangerous; sure, if you think Base64 is encryption and are using it to store passwords, please stop.

Almost all tech can be used in the wrong way; that does not make the tech bad if used correctly.


The story is that products as huge as OneDrive and Google Maps use it in terrible ways.


Seems like the problem is not with the URL shorteners but with OneDrive's braindead security model. Somehow having a link to one file allows the attacker to see all the user's files? What were they thinking?


Instead of trying to make brute force more costly, couldn't these services make it impossible, by forcing the fulfilment of a CAPTCHA when trying to expand/follow a shortened URL?



I agree this could become a big issue - but I wouldn't consider it a "security vulnerability" per se.

URLs aren't secure, and shouldn't really be considered so.


I think it's less about the URL itself and more about the services which automatically generate them for content without the user being fully cognizant of what that implies/means. It has the potential to publicize information without the user's intent or knowledge; in the case of OneDrive from the article, it exposed documents with sensitive information when the URL itself wasn't even shared, it was just brute-forced. Prior to Google's changes to the URL shortening, it sounds like it was possible to get quite a bit of personal identifying information just by guessing at the shortened URL, even if the URL itself was never shared.

Even if URL shortening is a feature that users are aware exists, the consequences of it certainly aren't immediately clear, and to my knowledge, not many of these services include a way to disable the generation of a shortened link or have a means to prevent this type of scanning from happening.


There is a lot of content from "cloud" apps that has "private link" sharing functionality, where "only someone with this link can view the <object>".

No, it really isn't security. But yes, it really does happen, and probably a lot more than you'd think.


This probably can be significantly helped from a UI design perspective. As more and more services auto-shrink links for readability, there may not be as much of a compelling need to shorten links. Why, in 2016, must we keep displaying everything as "raw text"? This is a perfect example of the power of richer displays.


As a very late adopter of Twitter, I was shocked when I first ran into shortened URLs, not understanding their value and seeing only the risks.

Interestingly, a lot of email clients also replace links with their own, creating similar risks, but nobody talks about those...

I have to applaud Slack for displaying and linking URLs the way they were meant to be.


What's more worrying to me than the enumerating of all short URLs, is the directory traversal when you know one URL. This is someone I know, and have specifically shared one file with, being able to see ALL my other documents. Glad to see that's gone now.


My URL shortener makes the user come up with their own short URL so they can decide how long to make it.

http://oc9.org/<insertYourShortURLHere>


It's possible to create a redirect loop, i.e, a URL that redirects to itself. I don't know if it's something that should happen or not, but Chrome gives me an error after too many redirects.


Is anyone else getting the following JSON for all requests to 1drv.ms?

{"error":{"code":"generalException","message":"General Exception While Processing"}}


One solution [0]: Password Protected URL Shortener http://thinfi.com/

[0] - at the cost of usability


I'd still like to see delete added to goo.gl.


It's hard to believe so many people consider the use of shortened URLs a security measure. It is not, and was never intended to be. A URL is exposed, by definition, whether long or shortened. A shortened URL is a convenience, not a security tool. Some people misuse base64 encoding for "security" as well, but it does not mean we should get rid of base64 encoding.


Nobody was using shortened URLs with the intention of creating security (as a security measure). Rather, they were ignorant of the way shortened URLs degrade security. Different thing.


URLs aren't secrets.


Why not, if you've tried to keep it secret? They're an incredibly common means of giving out things to a limited set of people without having to explicitly authenticate them. They're used by a lot of mailing lists for e.g. unsubscribe links. If there's enough entropy and they're once-only it's a reasonable approach.

Otherwise you end up with people just sending around "go to this url with this login and password" through email and chat instead, which is slightly harder to automatically exploit but not really more secure.
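If you do hand out capability URLs like that, the standard-library way to get enough entropy is Python's `secrets` module (a sketch; the token length and URL shape here are design choices, not a prescription):

```python
import secrets

# 32 random bytes is ~256 bits of entropy, far beyond any brute-force scan;
# token_urlsafe base64url-encodes them into a URL-safe string.
token = secrets.token_urlsafe(32)
reset_link = f"https://example.com/reset/{token}"

def tokens_match(a: str, b: str) -> bool:
    # Compare in constant time to avoid leaking matches via timing.
    return secrets.compare_digest(a, b)
```

Combine that with single use and a short expiry window, as password-reset flows do, and the "URL as credential" pattern holds up reasonably well.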


Simple script to brute-force goo.gl URLs, for fun. :) https://gist.github.com/cyc115/f22db26de6a5d723ef6094a97f0ed...


This vuln probably also exists on imgur


Yeah, imgur's predictable too. Go look up "imgur roulette".


Well the URLs are predictable...

But I don't think you can find out one's imgur username or more of their photos from just having a link to one of their photos.

I also tested from an arbitrary image in an album, trying to go backwards to find out what album it belongs to, but that doesn't seem possible either.


There is nothing inherently wrong with a short URL, provided that supporting infrastructure has proper security.

Even now, if you connect to your favorite “trusted” long domain names, nothing stops that from being totally hijacked by an untrustworthy Internet service provider or other entity that has access (or more insidiously, inserting crap like ads that were not in the original source).

And heck, long URLs are suspicious as well. I’m sure by now everyone has received one of those ridiculous "important@facebook.com.kdsjfksdjfkdsfjdskfjdskfjdskfjdskfjdskfjdskfjdskfs.oopsmalware.com".

The push should be for broader adoption of mechanisms that make it hard to subvert what you download, and easy to verify what will happen when you click a link.


This article is more interesting from a UX point of view. It seems that many people think a gibberish-looking URL, however short, is nice and safe. That confers a false sense of privacy.

The part about automatically-generated short URLs in MS docs is (was?) worrisome. Few users understand the implication of having a public URL that directly references their document, which, in all likelihood, was intended for a restricted audience.

I should point out that Slack has the same bug, though their URLs are simply obfuscated with a longer token. Shameless plug: Cisco Spark[0] has solved that with end-to-end encryption.

[0] https://support.ciscospark.com/customer/en/portal/articles/1...


Sounds like the web-office tools need a "partially public" option. I mean, I want to do whatever, whenever to my private documents, but when I open a document for public editing because my collaborators don't have a Google account and so I just share out a long URL with them, the expectation is that the worst thing that could happen is a malicious actor finding the URL and mangling the document, which I'd then have to revert.

Giving my collaborators enough power to inject executables is far beyond my needs and my intent when I make the doc "open". At worst I'd expect to find a document edited with a link to an external malware exe, not some horrifying autorun problem.

You could also show a warning when the user clicks a link in a publicly editable doc: "This document is publicly editable, which means any rando on the internet might have set this link, not just your buddy who made the doc. Are you really sure you want to go there?"



