

Ask HN: Why would anyone POST anything in AJAX? - mlLK

Bear with me here because I'm bound to take you off on a tangent, but as I've been developing more and more server-side-oriented applications for HTTP, I find myself pondering the advantages and disadvantages of implementation; so this may come across as an answer rather than a question. But wasn't AJAX created as a work-around for POST? Correct me if I'm wrong, but POST is primarily a method for the server, not the client, while GET behaves more like the web today (2.0) in that users spend more time requesting stuff (an xhr that passes some value and in turn gets back a response) from a server rather than writing or executing stuff on that server.

If anything, AJAX seems like it was made especially for GET, to cut down on how much and what was being passed with POST... I might be way off, but does it not seem contradictory to POST something in AJAX without returning a response?
======
pmjordan
There are numerous implications of choosing POST or GET.

First off, the HTTP specification requires that GET (and HEAD) requests be
idempotent^W EDIT: _side effect free_ to enable caching. Clearly, violations
of this abound; however, proxy servers and other infrastructure will often
hold you to this rule, so make sure you know what you're doing. (proxies and
clients are permitted to withhold or arbitrarily repeat GET requests without
changing the outcome)

Second, security: although AJAX in the strictest sense falls under the same-
origin policy, you can quite easily make GET requests to foreign hosts in
JavaScript by creating SCRIPT, IMG or IFRAME elements with appropriate SRC
attributes, whereas you'll struggle to create a foreign-host POST.
Inappropriate use of GET is therefore the enabler of XSRF attacks. CAUTION:
read petewarden's reply regarding XSRF attacks via POST forms.
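To make this point concrete, here's a minimal sketch (the victim URL and parameter names are made up): if a state-changing action is exposed via GET, any page the user visits can fire it cross-site just by creating an element with a SRC attribute.

```javascript
// Sketch of the technique described above, with a hypothetical victim
// URL. Building the forged URL is the only real logic; in a browser,
// assigning it to an element's SRC fires the cross-site GET, with the
// victim's cookies attached.
function forgedGetUrl(base, params) {
  const query = Object.entries(params)
    .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
    .join('&');
  return base + '?' + query;
}

const url = forgedGetUrl('https://victim.example/delete_post', { id: '42' });
// In a browser: new Image().src = url;  // request goes out silently
```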

Third, data size, as others have said. URLs are notoriously problematic for
conveying any information other than short text.

Fourth, if web crawlers get hold of any URLs which exhibit side effects on
GET, they will wreak havoc on your site. (though this can admittedly be
alleviated by judicious use of robots.txt)

Finally, you seem to imply that POST requests will not or cannot generate a
response the way GET does. This is not the case; responses to POST work in the
same way (except they must not be cached). The deciding factor is the nature
of the _request_ (side effect or no side effect), not the nature of the
response.

~~~
petewarden
With regards to point #2, it's definitely inconvenient to cross-site POST from
the client side, but not hard: create a form with a target pointing at the
external destination and submit it via JavaScript; wrap it inside an invisible
frame for neatness.
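Roughly like this sketch (the target URL and field names are invented); the attacker's page carries markup along these lines and a one-line script submits the form on load, with the victim's cookies riding along:

```javascript
// Hypothetical illustration of a cross-site POST: a hidden iframe as
// the form's target keeps the attack invisible, and the script tag
// submits the form automatically.
function crossSitePostPage(action, fields) {
  const inputs = Object.entries(fields)
    .map(([k, v]) => `<input type="hidden" name="${k}" value="${v}">`)
    .join('\n    ');
  return `<iframe style="display:none" name="sink"></iframe>
<form id="f" method="post" target="sink" action="${action}">
    ${inputs}
</form>
<script>document.getElementById('f').submit();</script>`;
}

const page = crossSitePostPage('https://victim.example/update_email',
                               { email: 'attacker@evil.example' });
```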

Don't let POST give you a false sense of security; checking the referrer can
help in this particular case (it's tricky to forge the referrer from the
client side) but you really need a robust check like a generated token to
avoid malicious cross-site requests.

~~~
jeremyawon
for what it might be worth to others:

i've generally found that the client already has a cookie-session established
in situations where cross-site ajax calls are a concern. so, my approach is to
require a hash of <request text>+<session id cookie> as a per-call signature.
javascript on another domain can't access the session id cookie and so won't
be able to generate this signature.

~~~
anamax
> so, my approach is to require a hash of <request text>+<session id cookie>
> as a per-call signature. javascript on another domain can't access the
> session id cookie and so won't be able to generate this signature.

If the security relies on other javascript not being able to access the
session cookie, why is it insecure to use said session cookie by itself as the
per-call signature?

Who can see the signature but not the session id cookie?

~~~
jeremyawon
you're right, hashing is overkill.

~~~
pmjordan
No, it isn't, as the cookie is sent with any request to your domain,
regardless of the source. The hashing uses the fact that the cookie is
inaccessible to JavaScript running in other domain contexts.

~~~
jeremyawon
the sid is inaccessible to js running in another domain context, so anamax is
pointing out that it suffices to just retransmit the sid as one of the request
parameters, rather than generating and including sig=hash(request+sid).

you're right that the remotely invoked request will include the sid as part of
the http cookies header - it's what goes into the get/post request parameters
which differentiates the domain context of the invoker. and if someone can't
generate and include a sig because they can't access the sid, then the
challenge might as well just be including the sid.

the only drawback i see is that i wouldn't have had an excuse to learn how to
implement sha ;)

*edit: jim_lawless points out below that you might not want a sid showing up
in web server logs. my system frequently rotates the client's sid, so it's not
a concern for me - but if you do use less transient sid cookies you might want
to implement the hash signature approach after all?

------
aristus
Not at all. Imagine a document editor. There are limits to the amount of data
that can be passed via GET URLs. Without POST you could only have 2KB
documents. In Archivd we use POST via Ajax anywhere the user mutates data. It
avoids the data length problem and makes it simpler to defend against CSRF
attacks.

~~~
mlLK
_Imagine a document editor._

For the web? Passing the entire document asynchronously?

 _In Archivd we use POST via Ajax anywhere the user mutates data._

See, I'm still new to AJAX, and the types of apps I use and experiment with
are generally broken down into independent events that pass values and perform
an xhr as each is triggered. I guess as your app grows, the data you're
handling with this methodology (as GET) suddenly gets too complex to carry out
on the client. Thanks for the input though.

~~~
boucher
280 Slides uses an asynchronous POST (or PUT) request for saving documents.

Just because the mechanism for generating requests has changed, that doesn't
mean the mechanics of REST should be thrown out the window.

REST is useful, and it's how most of the infrastructure of the web works. It's
worth understanding this and trying to build your applications accordingly.

------
cperciva
Many systems have limits on the maximum length of a GET query string, since
said strings form part of a URL. In most cases these limits are quite large
(4kB or more) but I've seen systems which can't even handle 1kB URLs.

If you want to send a large amount of form data and not have things randomly
break, it's a good idea to use POST instead of GET.
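A defensive sketch of that advice (the 2000-character cutoff is a made-up conservative threshold): serialize the form data, and fall back to POST whenever the resulting URL would be long enough to break somewhere.

```javascript
// Choose GET or POST based on how long the serialized URL would be.
// MAX_GET_URL is an assumption, picked to stay under the limits of
// the pickiest systems.
const MAX_GET_URL = 2000;

function buildRequest(baseUrl, params) {
  const query = Object.entries(params)
    .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
    .join('&');
  const getUrl = baseUrl + '?' + query;
  if (getUrl.length <= MAX_GET_URL) {
    return { method: 'GET', url: getUrl, body: null };
  }
  return { method: 'POST', url: baseUrl, body: query };
}
```

Length is only half the story, of course: anything with side effects should be a POST regardless of size, per the caching point above.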

~~~
jamroom
Internet Explorer has problems with URLs over 2047 characters in length - if
you need to handle large amounts of text (say, from a textarea), POST is the
way to go.

------
jaxn
In addition to request size limits, using the HTTP methods in server-side code
can simplify it. If I GET a person record, I send it down; if I POST a person
record, I process the passed parameters. With that separation I may find that
POST requests don't require sending any data back to the client (other than a
success indicator), which reduces bandwidth and improves client response time.

I have really grown to appreciate this simplification while working with webpy
on AppEngine. I think that my request handlers are about the cleanest they
have ever been.
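That handler separation, sketched in JavaScript rather than webpy (webpy dispatches to GET()/POST() methods on a handler class in much the same shape; the in-memory person store here is an invented stand-in):

```javascript
// In-memory stand-in for a datastore, just for illustration.
const people = new Map([[1, { id: 1, name: 'Ada' }]]);

const personHandler = {
  // GET: side-effect free - just send the record down.
  GET(id) {
    return { status: 200, body: people.get(id) || null };
  },
  // POST: process the passed parameters; a bare success indicator is
  // often all the client needs back.
  POST(id, params) {
    people.set(id, { id, ...params });
    return { status: 200, body: { ok: true } };
  },
};

function dispatch(method, id, params) {
  return personHandler[method](id, params);
}
```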

------
gills
I use ajax POSTs for autosaves, and for creating and deleting 'child' objects
(when the UI for the sub-object should be added to or removed from the page
without a reload). Basically, any mutation where I don't want a reload.
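An autosave along those lines might look like this sketch (the /autosave endpoint and the debounce delay are invented); the request is built separately from the sending, and debounced so rapid edits coalesce into one POST.

```javascript
// Build the POST that would be handed to an XMLHttpRequest or fetch.
// The /autosave endpoint is hypothetical.
function buildAutosaveRequest(doc) {
  return {
    method: 'POST',
    url: '/autosave',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  };
}

// Debounce edits: only the last document state within the delay window
// actually gets sent.
function makeAutosaver(send, delayMs) {
  let timer = null;
  return function onEdit(doc) {
    clearTimeout(timer);
    timer = setTimeout(() => send(buildAutosaveRequest(doc)), delayMs);
  };
}
```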

------
jim_lawless
GET parameters can ( and often do ) show up in web-server logs. Transmitting
sensitive data via a GET ( even over SSL ) may ultimately cause security /
privacy concerns.

------
cnlwsu
POST is easier to protect from XSRF attacks when using Django's CSRF
middleware.

