
Amazon S3 will no longer support path-style API requests - cyanbane
https://forums.aws.amazon.com/ann.jspa?annID=6776
======
samat
One important implication is that collateral freedom techniques [1] using
Amazon S3 will no longer work.

To put it simply: right now I could put some stuff not liked by the Russian or
Chinese government (maybe an entire website) on S3 and give out a direct link like
[https://s3.amazonaws.com/mywebsite/index.html](https://s3.amazonaws.com/mywebsite/index.html).
Because it's https, there is no way a man in the middle knows what people read
on s3.amazonaws.com. With this change, dictators see my domain name and block
requests to it right away.
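To make the privacy difference concrete, here is a small sketch (the bucket name `mywebsite` is just an example) of what a passive network observer can learn under each URL style, given that HTTPS encrypts the path but not the hostname carried in DNS lookups and the TLS SNI extension:

```python
from urllib.parse import urlparse

def observable_hostname(url: str) -> str:
    """The hostname a passive observer sees via DNS and TLS SNI.
    Under HTTPS the path stays encrypted; the hostname does not."""
    return urlparse(url).hostname

# Path style: the censor only ever sees the shared S3 hostname.
print(observable_hostname("https://s3.amazonaws.com/mywebsite/index.html"))
# s3.amazonaws.com

# Virtual-hosted style: the bucket (i.e. the site) leaks into the hostname.
print(observable_hostname("https://mywebsite.s3.amazonaws.com/index.html"))
# mywebsite.s3.amazonaws.com
```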

I don't know if they did it on purpose or just forgot about those who are less
fortunate in regards to access to information, but this is a sad development.

This censorship circumvention technique is actively used in the wild, and
losing Amazon is no good.

[1] [https://en.wikipedia.org/wiki/Collateral_freedom](https://en.wikipedia.org/wiki/Collateral_freedom)

~~~
xiaq
You cannot solve a political problem with a technical solution.

~~~
geofft
Sure you can. Weapons research is a very common counterexample; people have
been solving political problems with technical solutions ranging from
sharpening spear-heads to achieving nuclear chain reactions.

(Of course "who is politically right" and "who has the most technical
expertise on their side" are at best tenuously related, but that's a different
and longstanding problem. If you believe you're politically right and you have
technical expertise on your side, use it.)

~~~
thereare5lights
Are there any non-violent technical solutions? I think we all know that's what
that person really meant.

~~~
prepend
Radio Free Europe is a technical, non-violent solution to a political problem.

Viagra solved the tiger-poaching political problem.

------
btown
What kind of company deprecates a URL format that's still recommended by the
Object URL in the S3 Management Console?

[https://www.dropbox.com/s/zzr3r1nvmx6ekct/Screenshot%202019-...](https://www.dropbox.com/s/zzr3r1nvmx6ekct/Screenshot%202019-05-03%2019.32.48.png?dl=0)

There are so, SO many teams that use S3 for static assets, make sure it's
public, and copy that Object URL. We've done this at my company, and I've seen
these types of links in many of our partners' CSS files. These links may also
be stored deep in databases, or even embedded in Markdown in databases.

This will quite literally cause a Y2K-level event, and since all that traffic
will still head to S3's servers, it won't even solve any of their routing
problems.

Make it a policy for new buckets, if you must, as long as you change the
Object URL output and add a giant disclaimer.

But don't. Freaking. Break. The. Web.

~~~
EugeneOZ
Also in millions of manuals, generated PDFs, sent emails... Some things you
just can't "update" anymore. It's a really disastrous change for web data
integrity.

~~~
hueving
One of the magicians in Las Vegas (the one at the MGM) even used S3 image
links in emails sent to everyone, "predicting" the contents of something that
hadn't happened yet.

~~~
paulddraper
David Copperfield does that.

------
astrocat
Amazon explicitly recommends naming buckets like "example.com" and
"www.example.com": [https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html](https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html)

Now, it seems, this is a big problem. V2 resource requests will look like this:
[https://example.com.s3.amazonaws.com/...](https://example.com.s3.amazonaws.com/)
or
[https://www.example.com.s3.amazonaws.com/...](https://www.example.com.s3.amazonaws.com/)

And, of course, this ruins https. Amazon has you covered for
*.s3.amazonaws.com, but not for *.*.s3.amazonaws.com or even
*.*.*.s3.amazonaws... and so on.

So... I guess I have to rename/move all my buckets now? Ugh.

~~~
BillinghamJ
The point of that is solely for doing website hosting with S3 though - where
you'll have a CNAME. Why would you name a bucket that way if you're not using
it for the website hosting feature?

~~~
the_mitsuhiko
It also comes up when working with other people's buckets. Right now, if you
build a service that is supposed to fetch from a user-supplied S3 bucket,
path-style access was the safest option.

Now one would need to hook the certificate validation and ignore the extra
dots, which can be quite tricky because it's deeply hidden in an SSL layer.
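The certificate problem here comes from the RFC 6125 rule that a `*` in a certificate name matches exactly one DNS label. A rough sketch of that matching rule (not the actual TLS library code):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """RFC 6125-style check: '*' in a certificate name matches
    exactly one DNS label, never several."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

# S3's wildcard cert covers single-label bucket names...
print(wildcard_matches("*.s3.amazonaws.com", "mybucket.s3.amazonaws.com"))  # True

# ...but a dotted bucket name adds labels, so validation fails.
print(wildcard_matches("*.s3.amazonaws.com", "www.example.com.s3.amazonaws.com"))  # False
```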

~~~
geofft
How does the S3 CLI handle this? Do they hook cert validation? (I assume they
_must_ actually validate HTTPS...)

~~~
the_mitsuhiko
Pretty sure you either get a cert error or they still use paths. Boto (what
it's built on) has had an open issue for this for a few years now.

------
TheLoneTechNerd
Does anyone have insight on why they're making this change? All they say in
this post is "In our effort to continuously improve customer experience". From
my point of view as a customer, I don't really see an experiential difference
between the subdomain style and the path style (one's a ".", the other's a
"/"), but I imagine there's a good reason for the change.

~~~
BillinghamJ
Three reasons -

First, to allow them to shard more effectively. With different subdomains,
they can route requests to various servers with DNS.

Second, it allows them to route you directly to the correct region the bucket
lives in, rather than having to accept you in any region and re-route.

Third, to ensure proper separation between websites by making sure their
origins are separate. This is less AWS's direct concern and more of a best
practice, but doesn't hurt.

I'd say #2 is probably the key reason, and perhaps #1 to a lesser extent. It
actively costs them money to have to proxy the traffic along.

~~~
peterwwillis
So "improving customer experience" is really Amazon speak for "saving us
money"

~~~
BillinghamJ
Makes it faster, reduces complexity and would allow them to reduce prices too

~~~
dredmorbius
Pricing is set by markets based on competitors' offerings. Reduced costs could
simply result in monopoly rents.

------
sl1ck731
Does the "you are no longer logged in" screen not infuriate anyone besides me?
There seems to be no purpose to it just redirecting you to the landing page
when you were trying to access a forum post that doesn't even require you to
be logged in.

Absolutely mind-boggling that, with as much as they pay people, they do
something so stupid and haven't changed it after so long.

------
cddotdotslash
This is going to break so many legacy codebases in ways I can't even imagine.

Edit: Could they have found a better place to announce this than a forum post?

~~~
Rexxar
Couldn't they do a redirection (301) to not break code ?

~~~
notmyname
No, because path-style bucket names weren't originally required to conform to
dns naming limitations. I don't know how they're going to migrate those older
non-conforming buckets to the host-style form.

------
jasonkester
I wonder how they’ll handle capitalized bucket names. This seems like it will
break that.

S3 has been around a long time, and they made some decisions early on that
they realised wouldn’t scale, so they reversed them. This v1 vs v2 url thing
is one of them.

But another was letting you have “BucketName” and “bucketname” as two distinct
buckets. You can’t name them like that today, but you could at first, and they
still work (and are in conflict under v2 naming).

Amazon's own docs explain that you still need to use the old v1 scheme for
capitalized names, as well as for names containing certain special characters.

It’d be a shame if they just tossed all those old buckets in the trash by
leaving them inaccessible.

All in, this seems like another silly, unnecessary deprecation of an API that
was working perfectly well. A trend I'm noticing more often these days.

Shame.

------
euank
One of the weird peculiarities of path-style API requests was that CORS
headers meant pretty much nothing for any bucket. I wrote a post about this a
while ago [0].

I guess after this change, the cors configuration will finally do something!

On the flip side, anyone who wants to list buckets entirely from the client-
side javascript sdk won't be able to anymore unless Amazon also modifies cors
headers on the API endpoint further after disabling path-style requests.

[0]: [https://euank.com/2018/11/12/s3-cors-pfffff.html](https://euank.com/2018/11/12/s3-cors-pfffff.html)
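The reason per-bucket CORS configuration was meaningless under path style is that the browser's same-origin policy keys on scheme + host (+ port), so every path-style bucket shared one origin. A quick illustration (hypothetical bucket names):

```python
from urllib.parse import urlparse

def origin(url: str) -> str:
    """The web origin (scheme://host) used for same-origin checks."""
    p = urlparse(url)
    return f"{p.scheme}://{p.netloc}"

# Path style: every bucket shares one origin, so per-bucket CORS
# headers rarely even come into play.
print(origin("https://s3.amazonaws.com/bucket-a/x.json"))  # https://s3.amazonaws.com
print(origin("https://s3.amazonaws.com/bucket-b/y.json"))  # https://s3.amazonaws.com

# Virtual-hosted style: each bucket is its own origin, so its CORS
# configuration actually governs cross-origin access.
print(origin("https://bucket-a.s3.amazonaws.com/x.json"))  # https://bucket-a.s3.amazonaws.com
```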

------
chillaxtian
A similar removal is coming in just 2 months for V2 signatures:
[https://forums.aws.amazon.com/ann.jspa?annID=5816](https://forums.aws.amazon.com/ann.jspa?annID=5816)

This could be just as disruptive.

Difficult to say that they will actually follow through, as the only mention
of this date is in the random forum post I linked.

~~~
scrollaway
Doubtful. Sigv2 is not supported in all regions, so any current software that
wants to be compatible with more than a portion of regions already has to
support sigv4.

That is a great way of introducing breaking changes. Imagine, for example:
IPv6 would be at near-100% adoption if new websites were only available over
v6.

------
ec109685
Amazon is proud that they never break backwards compatibility like this, with
quotes like "the container you are running on Fargate will keep running 10
years from now."

Something weird is going on if they don’t keep path style domains working for
existing buckets.

~~~
quickthrower2
Only 10 years. Shame that that is a boast. 100 years would be better.

------
sly010
Is there a deprecation announcement that does not include the phrase "In our
effort to continuously improve customer experience"?

Edit: autotypo

------
bagels
Fun fact: The s3 console as of right now still shows v1 urls when you look at
the overview page for a key/file.

------
reilly3000
I was already planning a move to GCP, but this certainly helps. Now that cloud
is beating retail in earnings, the ‘optimizations’ come along with it. That
and BigQuery is an amazing tool.

It’s not like I’m super outraged that they would change their API, the
reasoning seems sound. It’s just that if I have to touch S3 paths everywhere I
may as well move them elsewhere to gain some synergies with GCP services. I
would think twice if I were heavy up on IAM roles and S3 Lambda triggers, but
that isn’t the case.

------
manigandham
This is most likely to help mitigate abuse of the shared domain, given
browsers' same-origin policy. This is very common when dealing with malware,
phishing, and errant JS files.

------
lazyant
`In our effort to continuously improve customer experience`: what's the
actual driver here? I don't see how going from two options to one, and forcing
you to change if you are on the wrong one, improves my experience.

~~~
thaumasiotes
[http://chainsawsuit.com/comic/2017/12/07/improvements/](http://chainsawsuit.com/comic/2017/12/07/improvements/)

> We asked our investors and they said you're very excited about it being less
> good, which is great news for you!

------
geekrax
There are millions of results for "https://s3.amazonaws.com/" on GitHub:
[http://bit.ly/2GUVjDi](http://bit.ly/2GUVjDi)

~~~
BillinghamJ
GitHub search is really poor. It is also including the uses of the subdomain
style.

~~~
geekrax
Agreed. The search could use some love for doing exact match.

The scale at which different libraries, tools, and systems that depend on
hard-coded S3 URLs will break from this change is insane.

------
merb
I see a problem when using the S3 library with other services that support S3
but only offer some kind of path-style access, like MinIO or Ceph with no
subdomains enabled. It will break once their Java API removes the old code.

------
pulkitsh1234

        ag -o 'https?://s3.amazonaws.com.*?\/.*?\/'| awk -F':' '{print $1, $4}' | sort | uniq | cut -d'/' -f 1 | sort | uniq -c | gsort -h -rk1,1
    

For anyone interested in finding the occurrences in their codebase. (Mac)
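A dependency-free alternative sketch in Python; the regex is an assumption about how the URLs appear in source files, so adjust it for your codebase:

```python
import re

# Matches path-style S3 references and captures the bucket name.
PATH_STYLE = re.compile(r"https?://s3\.amazonaws\.com/([A-Za-z0-9._-]+)/")

def path_style_buckets(text: str) -> list[str]:
    """Return bucket names referenced via path-style S3 URLs in `text`."""
    return PATH_STYLE.findall(text)

sample = '<img src="https://s3.amazonaws.com/my-bucket/logo.png">'
print(path_style_buckets(sample))  # ['my-bucket']

# Virtual-hosted URLs are not flagged.
print(path_style_buckets("https://my-bucket.s3.amazonaws.com/logo.png"))  # []
```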

------
Roark66
The AWS API is an inconsistent mess. If you don't believe me, try writing a
script to tag resources. Every resource type requires a different way to
identify it, a different way to pass the tags, etc. You're pretty much
required to write different code to handle each resource type.

------
mark242
This will hopefully prevent malicious sites hosted on v1-style buckets from
stealing cookies/localstorage/credentials/etc.

~~~
judge2020
Care to elaborate? Why would there be any secrets stored via s3.amazonaws.com?

------
caseymarquis
I'm so glad I saw this. I would have been very confused when this went live
had I not seen this post today. I wish I could upvote this more.

------
phlakaton
Hm. I had a local testing setup using an S3 standin service from localstack
and a Docker Compose cluster, and path-style addressing made that pretty easy
to set up. Anyone else in that "bucket?" Suggestions on the best workaround?

------
swiley
Commercial platform breaks things people have built on it for the sake of
"continuously improving customer experience."

Also: see photos of your favorite celebrity walking their dog and other news
at 11.

------
segmondy
So much for customer obsession.

~~~
jhall1468
I don't think "never change" is customer obsession. Improving products is
customer obsession.

~~~
notatoad
this goes beyond "never change". never changing your product is a bad thing,
but never changing your URLs is a mantra everybody should live by.

------
orf
[https://github.com/search?q=%22https%3A%2F%2Fs3.amazonaws.co...](https://github.com/search?q=%22https%3A%2F%2Fs3.amazonaws.com%2F%22&type=Code)

Over a million results (+250k http). This is going to be painful.

------
ARandomerDude
TL;DR

Migrate

from: s3.amazonaws.com/<bucketname>/key

to: <bucketname>.s3.amazonaws.com/key

no later than: September 30th, 2020
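A minimal sketch of that migration for DNS-compatible bucket names (names with dots or uppercase letters need separate handling, as noted elsewhere in the thread):

```python
from urllib.parse import urlparse, urlunparse

def to_virtual_hosted(url: str) -> str:
    """Rewrite a path-style S3 URL into virtual-hosted style.
    Assumes the bucket name is DNS-compatible (lowercase, no dots)."""
    parts = urlparse(url)
    # The first path segment is the bucket; the rest is the object key.
    bucket, _, key = parts.path.lstrip("/").partition("/")
    return urlunparse(parts._replace(
        netloc=f"{bucket}.{parts.netloc}",
        path=f"/{key}",
    ))

print(to_virtual_hosted("https://s3.amazonaws.com/mybucket/some/key.txt"))
# https://mybucket.s3.amazonaws.com/some/key.txt
```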

------
pronoiac
For other folks looking for announcement feeds, see
[https://forums.aws.amazon.com/rss.jspa](https://forums.aws.amazon.com/rss.jspa)
\- announcements are the asterisks.

------
rynop
How does this impact CloudFront origin domain names? I have an s3 bucket as a
CF origin and the format the AWS CF Console auto-completes to is:

<bucket>.s3.amazonaws.com

Do I need to change my origin to be, Origin domain name: s3.amazonaws.com,
Origin Path: <bucket>

This is a sneaky one that will bite lots of folks as it is NOT clear.

~~~
bagels
I think you have this backwards.

<bucket>.s3.amazonaws.com is the V2 url formula.

------
yeahitslikethat
"In our effort to continuously improve customer experience, the path-style
naming convention is being retired in favor of virtual-hosted style request
format. Customers should update their applications"

How does forcing customers to rewrite their code to conform to this change
improve customer experience?

~~~
blantonl
Maybe, as the technical debt of the current architecture comes due, it's time
to make the hard choices to keep a good customer experience?

~~~
yeahitslikethat
It's on Amazon to pay off their technical debt, not their customers. They are
turning off a feature at their customers' expense.

That's the exact opposite of good customer service.

------
jasonpeacock
IMO, this is an improvement: it makes it clear that the bucket is global and
public, whereas with the path style you could believe that it was only visible
when logged into your account.

It also helps people understand why the bucket name is restricted in its
naming.

~~~
ceejayoz
> it makes it clear that the bucket is global and public

How does it do that? You can host a private bucket at foo.s3.amazonaws.com
just fine.

~~~
geofft
I think the claim is that the _namespace_ is global and public, i.e., you and
I can't both have buckets named "foo". There is only one S3 bucket named "foo"
in the world.

If it's [https://s3.amazonaws.com/foo/](https://s3.amazonaws.com/foo/) you
could believe that it's based on your cookies or something, but if it's
[https://foo.s3.amazonaws.com/](https://foo.s3.amazonaws.com/) it's more
obvious that it's a global namespace in the same way DNS domain names are (and
that it's possible to tell if a name is already in use by someone else, too).

------
xyzzy_plugh
This will break software updates for so many systems, probably even some
Amazon devices.

------
miguelmota
Always confused me how they had two different ways of retrieving the same
object. Glad that they're sticking to the subdomain option. Sucks to go back
and check for old urls though. This change might break a good chunk of the
web.

------
niyazpk
One way to do this without breaking existing applications would be to charge
more for path-style requests for a while, then deprecate once enough people
have moved away, so that fewer people are outraged by the change.

------
interfixus
> _In our effort to continuously improve customer experience,_ [feature x] _is
> being retired_

In this case, the most highly improved experience I can think of would be
that of sundry nefarious entities monitoring internet traffic.

------
jvarsanik
Does anyone know if this will affect uploads? We are getting an upload URL
using s3.createPresignedPost, and it returns (at least currently) a path-style
URL...

------
tckr
The title is misleading. Path-style requests ("/foo/bar/file.ext") are still
supported.

What changes is that the bucket name must be in the hostname.

~~~
k__
Path style can be used in hostnames?

------
ajcodez
I switched to MinIO for anything new. Happy user -
[https://min.io/](https://min.io/)

------
abra559
this looks to be largely resolved:
[https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/](https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/)

------
massung
Anyone know if this will affect internal uses (e.g. EMR) of the s3 scheme:
s3://bucket/path/key?

~~~
electrum
No. The file system implementation uses the AWS S3 client, which will
automatically use virtual-host style when possible (if the bucket name
supports it).
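Roughly, the SDKs choose virtual-hosted style whenever the bucket name is safe to put in a hostname, and fall back to path style otherwise. A simplified sketch of that decision (not the actual botocore logic; the real rules are stricter):

```python
import re

# Simplified DNS-compatibility check: lowercase letters, digits, and
# hyphens, 3-63 chars, no dots (dots break the wildcard TLS cert).
DNS_COMPAT = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def s3_url(bucket: str, key: str) -> str:
    """Mirror (roughly) the SDK's choice of addressing style."""
    if DNS_COMPAT.match(bucket):
        return f"https://{bucket}.s3.amazonaws.com/{key}"
    return f"https://s3.amazonaws.com/{bucket}/{key}"  # legacy fallback

print(s3_url("my-bucket", "a.txt"))  # https://my-bucket.s3.amazonaws.com/a.txt
print(s3_url("My_Bucket", "a.txt"))  # https://s3.amazonaws.com/My_Bucket/a.txt
```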

------
gigatexal
Hmm I don’t understand why this change is happening. What does this gain?
Removal of tech debt?

------
cs02rm0
I didn't know path style was possible.

I'd have found it really useful. :-/

------
iamgopal
They should produce a free redirect service at least.

------
RocketSyntax
Boo. Now old packages won't work.

------
etxm
This is going to be the Y2K of September 2020.

------
blairanderson
TL;DR
[https://news.ycombinator.com/item?id=19821813](https://news.ycombinator.com/item?id=19821813)

------
gcb0
Does that mean people still have tons of public-by-mistake S3 buckets because
of the clumsy UI, and they just gave up and are sweeping what's left under the
rug?

~~~
scarface74
You really have to try to make a bucket public. Even when you do, you get
warnings within the UI, and there is a column showing you it’s public.

Is there even a UI option to make a bucket public anymore? I always edit the
bucket policy and add the JSON to make it public read only.

~~~
ceejayoz
A lot of that is a fairly recent development, though. Lots of buckets are
lying around from the years when it largely didn't warn you.

------
blantonl
I'm kind of shocked at some of the responses here... everything from outrage,
to expressing dismay at how many things could break, to how hard this is to
fix, to accusing Amazon of all kinds of nefarious things.

How hard is it for 99% of the developers and technical leaders here to search
your codebase for s3.amazonaws.com and update your links in the next _18
months_?

~~~
fpgaminer
> How hard is it for 99% of the developers and technical leaders here to
> search your codebase for s3.amazonaws.com and update your links in the next
> 18 months?

I've got a number of hobby projects, some hosted on AWS, that I built ages
ago. I have no idea how this change will affect those projects because... I
just frankly don't remember the codebases. I built them on a weekend, set them
up, and now just use them.

It isn't the end of the world. But I'm not really excited about having to dig
up old code, re-grok it, and fix anything that changes like these might break.

I suppose that's just the nature of a developer's life. But I think many of us
long for a "write once, run forever" world. Horror stories about legacy
software aside, it was nice to be able to write software for Windows and then
have it work a decade later.

~~~
blantonl
_I suppose that's just the nature of a developer's life. But I think many of
us long for a "write once, run forever" world._

Well, I think AWS developers are in the same boat right? Here we are.

An architectural decision that many years ago was the approach now needs to be
rethought and updated.

~~~
paulddraper
How is that the same boat? Seems like the opposite.

