Two URLs are enough for everyone (ustunozgur.com)
10 points by ludwigvan 736 days ago | 10 comments



> How would you explain [complicated query params] to a nontechnical person?

1) A lot of people actually get what ?foo=bar means in a URL, especially when it is well named (e.g. "&page=3"). Is it really necessary to make URLs overly complicated?

2) Why would anybody - with or without a technical background - write a complicated query URL by hand? That's what <form> is for.

3) The entire premise is based on the "poor non-technical user"... who is writing a query string for an API? This is a complete straw man.
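For what it's worth, point 1 is easy to demonstrate: a conventional query string decodes to plain key-value pairs. A quick sketch with Python's standard library (the example URL is made up):

```python
from urllib.parse import urlsplit, parse_qs

# A conventional URL with well-named query parameters.
url = "https://example.com/articles?author=lucas&page=3"

params = parse_qs(urlsplit(url).query)
print(params)  # {'author': ['lucas'], 'page': ['3']}
```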

> Two endpoints is good enough for most people for web application programming.

"web application"? The example was a (fancy) search form to retrieve articles. That's not a "web application" in any way. Each article should have it's own URL, and search (even complicated search, progressively enhanced to dynamically load the results) was a solved problem a decade ago.

It is inappropriate to render this on the client, and trying to limit it to two public URLs screams that this is a cheap attempt to defeat deep linking and create yet another walled garden. Stop trying to turn the web back into TV.

> You can play with it and the StarWars API

An entirely broken demo. Sending an empty body tag is not a useful page. There isn't even fallback text, so it just looks broken when the incorrect assumption that "everybody has javascript" fails.


Hi there, thanks for the comment. Author here.

No, the poor non-technical user is not the entire premise. What I was trying to convey is that the reader should get a chance to reevaluate the decision to send parameters that way, rather than accepting it as it is. It is hard for experienced people to think that way because they have become accustomed to it, but most of the time, in programming, we don't realize simpler solutions are possible. It helps to reevaluate from the eyes of a beginner.

Rich Hickey has a great talk on this called Simple Made Easy: http://www.infoq.com/presentations/Simple-Made-Easy

> web application

The blog post mentions getting an article, but web applications nowadays basically move complexity to the client and see the server as a single API. Having that be a single URL is the natural consequence. Any request queries or mutations are sent to that URL.
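To make that concrete, here's a minimal sketch of a single-endpoint handler, assuming a hypothetical operation registry and a JSON payload shape of my own invention (the real demo uses GraphQL, but the dispatch idea is the same):

```python
import json

# Hypothetical registry of operations the single endpoint knows about.
OPERATIONS = {
    "getArticle": lambda args: {"id": args["id"], "title": "Hello"},
}

def handle(body):
    """Single-endpoint handler: every request is one structured payload."""
    payload = json.loads(body)           # structured data in...
    op = OPERATIONS[payload["op"]]
    result = op(payload.get("args", {}))
    return json.dumps(result)            # ...structured data out

print(handle('{"op": "getArticle", "args": {"id": 7}}'))
# {"id": 7, "title": "Hello"}
```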

I have updated the demo link so that it now starts with a real query, rather than an empty page. See an example at: http://bit.ly/1Qa4h00


> we don't realize simpler solutions are possible

I'm not seeing a "simpler solution" - your URL is far more complex, and is probably even harder to parse for people who have already learned how URLs work. Making non-technical people learn yet another new way to do things isn't helping.

Also, at some point, you're just going to have a complicated interface.

> move complexity to the client

It's not your computer, so you don't get to decide how the client handles the page. If you want your content to be read, try actually sending it.

Note that this is a statement of fact, not an opinion about how I wish computers worked. You do not know what the client is doing when it renders a page (adblocking is a common example), so moving complexity to the client unnecessarily is risky. So far I'm still only seeing a search interface, which is (by definition) purely server side.

> sent to that URL

Ok, I think I get what you're excited about: you're reinventing #respond_to/#respond_with[1], so the URL can be reused for different mime-types.

[1] http://edgeapi.rubyonrails.org/classes/ActionController/Resp...
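The respond_to idea boils down to content negotiation: one URL serving several representations based on the Accept header. A rough sketch of the concept (not the Rails API, and the article data is made up):

```python
import json

def render_article(article, accept):
    """One resource, many representations, chosen by the Accept header."""
    if "application/json" in accept:
        return json.dumps(article)
    return "<h1>" + article["title"] + "</h1>"  # default: HTML

article = {"title": "Two URLs"}
print(render_article(article, "application/json"))  # {"title": "Two URLs"}
print(render_article(article, "text/html"))         # <h1>Two URLs</h1>
```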

> rather than an empty page.

(by the way - curl complains about that URL. Something about brackets - curl treats {} and [] as URL globbing patterns, so it would need -g/--globoff. No matter, wget is fine)

    $ wget -O /tmp/page.html 'http://graphql-swapi.parseapp.com/?query=%23%20Welc ... %0A}'
    $ </tmp/page.html sed -ne '/<body>/,/<\/body>/ p' | sed -e '/<script>/,/<\/script>/ d'
    <body>
    </body>
It's still an empty page.


Try this:

    curl 'http://graphql-swapi.parseapp.com' \
      -H 'content-type: application/json' \
      --data-binary '{"query":"{ allFilms(first: 3) {    films {   title, director  } }}"}'

The query param is just for easy sharing online when you build a query.
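To be clear about what that sharing link is: just the raw GraphQL text percent-encoded into a single ?query= parameter. A sketch (the query here is a stand-in, not the demo's exact one):

```python
from urllib.parse import quote, unquote

graphql = "{ allFilms(first: 3) { films { title director } } }"

# The shareable link is the query text percent-encoded into ?query=.
url = "http://graphql-swapi.parseapp.com/?query=" + quote(graphql)
print(url)

# And it round-trips back to the original document.
assert unquote(url.split("?query=")[1]) == graphql
```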


> The query param is just for easy sharing online when you build a query.

Gee, if only there were a way to encode that data into the URL itself without embedding an almost-JSON document! Someone should invent something like that.
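The "something like that" being, of course, plain application/x-www-form-urlencoded. A sketch of the conventional encoding, with made-up parameter names:

```python
from urllib.parse import urlencode

# The conventional way to put structured parameters in a URL:
# plain key-value pairs, no embedded JSON-ish document required.
params = {"type": "films", "first": 3, "fields": "title,director"}
print("http://example.com/search?" + urlencode(params))
# http://example.com/search?type=films&first=3&fields=title%2Cdirector
```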


Get outta here, that's nuts.


> It is functions all the way down

I came to approximately this conclusion a year or so ago, on my personal road of HTTP experience. I didn't even realize it was related to HTTP at the time: I wanted an RPC protocol that could do various things - it's just function calls, right? The thing was, the more features I added to it, the more it started to resemble HTTP. I think the final straw was caching: I needed a way to identify the resource to be cached, and eventually realized that the identifier was a URL, and that the whole thing was essentially HTTP if I changed from "function calls" to "resources".
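That caching realization can be sketched in a few lines: once every call names its resource with a URL, a cache is just a map keyed by (method, URL), and only safe methods like GET get cached - which is also why a single POST-everything endpoint defeats it. (A toy sketch, not a real HTTP cache:)

```python
# Toy cache keyed by (method, URL); only GET results are stored.
cache = {}

def fetch(method, url, compute):
    key = (method, url)
    if method == "GET" and key in cache:
        return cache[key]          # served from cache, compute() skipped
    result = compute()
    if method == "GET":
        cache[key] = result
    return result

calls = []
def expensive():
    calls.append(1)
    return "body"

fetch("GET", "/articles/7", expensive)
fetch("GET", "/articles/7", expensive)
print(len(calls))  # 1: the second GET was served from cache
```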

> The server receives two strings: the path and the query params (or post body as string). The first string is parsed to get the function to call and some parameters to be passed to that. The second string is parsed to get the additional parameters to be passed to that function.

> We have been accustomed to getting JSON responses from the server. Isn’t it time to stop sending strings to the server and start sending a structured data format?

To some extent, the data is structured. The path is hierarchical (though most web frameworks/servers I've interacted with do a crappy job of exposing that). With regards to the query string and POST body, I feel like the article is muddling the encoded (string) form and the decoded form. We have to encode to a string to transmit: it's going over TCP, or a wire even, at some point. But it's perfectly fine to put JSON data in a POST body. Even the default, x-www-form-urlencoded, is "structured" in the sense that it's key-value pairs, though perhaps less "structured" than JSON. (It has a well defined form, and a way to parse it, though it isn't as rich in what it encodes as JSON.) Near as I know, the query string can be JSON too; it just typically isn't by convention.
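Decoding both wire formats side by side shows the point - the difference is richness (types, nesting), not "structured vs. strings". A sketch with made-up bodies:

```python
import json
from urllib.parse import parse_qs

# Two wire encodings of the same POST body, both structured once decoded.
form_body = "title=Jaws&page=2"
json_body = '{"title": "Jaws", "page": 2}'

print(parse_qs(form_body))    # {'title': ['Jaws'], 'page': ['2']}
print(json.loads(json_body))  # {'title': 'Jaws', 'page': 2}
# Note: urlencoded values are all strings; JSON keeps the integer.
```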

I'm curious how this supposed "two URL" approach would attempt caching.

I think the newly drafted SEARCH method[1] (intended for search requests; carries a body) would suit the example in the post fairly well. The body on that request could be GraphQL. (And today, if you really need an odd request like this with a body… there's POST.)

[1]: https://tools.ietf.org/html/draft-snell-search-method-00


If anything, URLs aren't used enough in SPAs. They can help users share state - for example, the results in the page when a certain filter is applied. Even data-intensive web applications can benefit from sharing and embedding. And everyone, including my parents, knows how to share a link.
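And that state can round-trip through the URL. A sketch with a made-up filter state (note the values come back as strings, so real apps need a small deserialization step):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Serialize SPA filter state into a shareable URL, then restore it.
state = {"filter": "published", "sort": "newest", "page": 4}
share_url = "https://example.com/articles?" + urlencode(state)

restored = {k: v[0] for k, v in parse_qs(urlsplit(share_url).query).items()}
print(restored)  # {'filter': 'published', 'sort': 'newest', 'page': '4'}
```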


I like the idea of one endpoint to get data from/to. But I fail to see how that is intrinsically related to URLs or SPAs.

The title makes it sound like you want to get rid of URLs, sort of like Safari. Which I think is a bad, bad idea.

I'm also not sold on the idea of "let's use SPAs for everything!".


So a website is a house. A house has articles, users, and two URLs. Does this mean that a house is a website? I imagine you getting home from work and opening the URL of your house. But I can't really imagine what happens next. Can you help me out? Where do you take a nap, for example? Where do you cook? Best regards



