If you have REST, why use XML-RPC? (joncanady.com)
38 points by generalk on Jan 14, 2010 | 58 comments



It's more work to write a client library for XML-RPC/SOAP, but a single one of those libraries is likely to work with any such service you encounter. Most users never have to write one themselves.

On the other hand, it's easy to write an HTTP "REST" client library. But it seems that everyone has a different idea of what REST is; Amazon's implementation, for instance, isn't even close to the generally accepted definition. So you end up needing to write a separate client module for every service you use, and that's where REST fails.

Because it's so easy to write a REST service, it seems more like REST is an excuse for lazy server writers to get by.


SOAP and its ilk seem like the easiest solution at first glance. Say I've got a server that can blat a frobnitz, what could be simpler than supplying a pair of functions?

  int[] GetBlattableFrobnitzIDs();
  void BlatFrobnitz(int frobnitzId);
Two functions that just magically appear when you point your IDE at a server and tell it to automatically generate a proxy class. Simple!

What's the REST alternative? Getting the list of IDs is okay, but how do you give a "blat" instruction? Until you've experienced REST in anger, it's not obvious what you need to do.

Go beyond superficial uses and REST shines, but a beginner used to calling library functions might not see it.


Go beyond superficial uses and REST shines, but a beginner used to calling library functions might not see it.

I'm fairly certain this is the reason that XML-RPC/SOAP has succeeded in the enterprise space and REST hasn't.


If I'm operating a Frobnitz service and my documentation doesn't indicate how to Blat a given Frobnitz, then the failing isn't my communication layer, it's my documentation.


No! REST begs you not to have any separate documentation that is mandatory to understand your API.

The response from the Frobnitz resource should contain hypertext that indicates how to Blat it: at least the method and url.
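
For example (a purely hypothetical sketch: the "links"/"blat" field names and the server are made up, in the Python 2 style of the other examples in this thread), the Frobnitz representation could carry the blat link so the client never constructs the URL itself:

    import urllib2, simplejson

    # Hypothetical representation returned by GET /frobnitz/42:
    #   {"id": 42, "links": {"blat": {"method": "POST", "href": "/frobnitz/42/blat"}}}
    frobnitz = simplejson.load(urllib2.urlopen('http://my-fake-server.com/frobnitz/42'))

    # Follow the hypermedia link instead of building the URL by hand.
    blat = frobnitz['links']['blat']
    urllib2.urlopen(urllib2.Request('http://my-fake-server.com' + blat['href'], data=''))
    # (urllib2 sends a POST whenever a request body is supplied)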


Both are good! Documentation helps ease users in, while proper hyperlinks make the service discoverable.


In my little thought experiment, I'm implementing the interface to the blatting service.


I loathe SOAP, but it may just be because Python doesn't support it well. In your example it seems like the obvious thing would be to make a POST request to /blat-frobnitz.
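
A minimal sketch of that approach (the host, endpoint, and form-encoded body are all assumptions, not from the original post):

    import urllib, urllib2

    # POST the frobnitz id as form data to a hypothetical /blat-frobnitz endpoint.
    data = urllib.urlencode({'frobnitz_id': 2452352})
    response = urllib2.urlopen('http://my-fake-server.com/blat-frobnitz', data)
    print response.getcode()   # whatever the service returns, e.g. 200 or 201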


https://fedorahosted.org/suds/ Suds is the only decent Python module for SOAP that I've seen. I've never been able to get ZSI or others to work very well.


Agreed, but I ran into some issues with Suds the last time I used it and ended up having to make a patch. I'm not familiar enough with either Suds or SOAP to know whose fault it was, but the whole experience still took way too long.

I'm sure it's great stuff if you're on Java or .NET and have an IDE do all the magic for you, but SOAP is really ugly from the outside.


I would put the frobnitz ID into the blat queue:

PUT http://my-fake-server.com/blat_q/2452352


I don't think you're doing that right.

PUT is idempotent, so it should only be used to update an entire resource -- no creating (POST), no partial updates (PATCH).

It would be best to have a URL, linked to from a response somewhere, that you can POST to in order to add to the queue, and which responds with a 201 or a 303.


A lot of questions are of the form "XML-RPC does X, but it isn't clear how to do X in REST", and the answer to most of those questions should be "You wouldn't design the system to do X when taking the REST approach."

Think of it this way: XML-RPC is structured programming, which is all about the subroutines you call, the data you pass to them, and the data you get back. REST is object oriented programming, which is all about the data and the methods you use to interact with the data.

In REST, a type of resource is like a class, a specific resource is like an object, and all objects have some/all of the same basic methods: OPTIONS, HEAD, GET, PUT, POST, DELETE. These methods always have the same purposes:

    OPTIONS: which methods does this resource support
    HEAD: what is this resource's metadata
    GET: get a representation of the resource
    PUT: store a new representation of the resource
    POST: create a new resource and return its URI
    DELETE: get rid of the resource
The HTTP standard defines the responses to all of these methods, except for the specific representations. The defined responses include the status codes to return, the resource metadata (various HTTP headers), and how to specify the Content-Type header which identifies the representation being returned. The standard also specifies how the client should tell the service which representations it knows how to handle, in order to negotiate and find a useful content-type.

Where REST services differ are:

   1) What resources are available
   2) What are the representations for each resource
   3) What additional arguments are supported for each method and representation. (Query parameters for HEAD and GET, content body for POST and PUT.)
Another typical pattern for REST services is that, for every resource type, there is also a collection resource. To create a new resource, you PUT to a specific resource URI, or you POST to the container. Which you choose to implement as a service designer depends upon who's responsible for deciding the URI for the new resource: if the client decides, use PUT; if the service decides, use POST.
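
A rough sketch of both variants (hypothetical host and resource names, using httplib from the Python 2 standard library):

    import httplib, simplejson

    conn = httplib.HTTPConnection('my-fake-server.com')
    headers = {'Content-Type': 'application/json'}

    # Client chooses the URI: PUT the representation directly to it.
    conn.request('PUT', '/widgets/my-widget',
                 simplejson.dumps({'name': 'my-widget'}), headers)
    resp = conn.getresponse()
    resp.read()
    print resp.status                               # e.g. 201 if created, 200/204 if replaced

    # Service chooses the URI: POST to the collection, then read the Location header.
    conn.request('POST', '/widgets',
                 simplejson.dumps({'name': 'another widget'}), headers)
    resp = conn.getresponse()
    print resp.status, resp.getheader('Location')   # e.g. 201, /widgets/1234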

One commenter asks (paraphrasing) "How do I use REST to call a function and pass an array as the third argument, and get back a sensible error if I pass a hash instead?". My response is that you probably wouldn't be doing something like that with a REST service. It's difficult to be more specific without a concrete example, but right from the top I'd say you're not calling functions in REST, and you need to invert your thinking in order to switch between XML-RPC mode and REST mode. (Similar to the way you can never really work well with OOP as long as you're thinking in terms of subroutines rather than classes and objects.)


This is why some of us still prefer to use XML-RPC. We just want to make an API, and don't need any of the other baggage, especially if we can no longer map our existing software design to a REST approach.

At least with real object-oriented programming you can come up with your own method names.


    Python 2.4.3 (#1,...)
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import xmlrpclib
    >>> server = xmlrpclib.ServerProxy("http://www.something.invalid/xmlrpc")
    >>> server.namespace.function(True, ["A", "B"], {"q": [1, 2, 3]})
    {'A': 8, 'B': ['hello', 8]}
"REST" can't match that. "REST + some other stuff I layer on top" can, but you have to bring more to the party; for instance, you can decide that everything will be JSON coming back, but you had to decide that, so you had to do the work. On the other hand as is so often the case with this increased responsibility you get more power; XML-RPC has no decent binary capabilities, for instance, which is easy with REST.

That doesn't mean the work isn't worthwhile. My personal feeling is that if XML-RPC meets your needs, you should probably use it, but if it doesn't, don't hesitate to move on. RPC is a crazy and cruel world, though; here there be dragons. REST is better than starting from scratch in most ways but you can still end up down the rabbit hole surprisingly quickly. For instance, REST tempts you into considering caching in a layer designed for caching human document accesses; this may or may not actually have the semantics you are interested in and it can get subtle.


Caching has been the main reason I have chosen REST over XML-RPC in certain instances - the ability to use existing web proxy and caching solutions to cache an API is certainly convenient. However, I completely agree about your last sentence here. Cache invalidation is tricky, since there isn't a standard HTTP method to tell a web cache to forget something it has cached. Also, client-side caching at a higher level using something like memcached has worked demonstrably better for me in many instances, and it offers more fine-tuned control over lifetime and expiration.


I'm confused about your Python example and how REST can't achieve that.

  bash$ curl http://my-fake-service.com/people/1.json
  {name: "John Doe", address: "123 Fake St"}
I think the opposite, however. My feeling is that REST will probably meet your needs, and if for some reason you're required to use XML-RPC, then use that. It all depends on your starting point and biases.


The difference is that your example uses a URL as input and outputs a JSON blob. The URLs still need to be constructed somehow and the output needs to be parsed if you want to do anything useful with this service. To compare apples-to-apples:

    >>> import urllib, simplejson
    >>> person_id = 1
    >>> person = simplejson.loads(urllib.urlopen('http://my-fake-service.com/people/%d.json' % person_id).read())
Now you have a data structure you can work with. But you've just traded a nice parameter-passing syntax for string building, and you're parsing JSON manually. Of course, you could wrap all of this with an API, and you probably should, but XML-RPC does a lot of this wrapping for you.
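
For instance, a thin wrapper (purely illustrative, reusing the made-up service from the example above) hides the string building and parsing:

    import urllib, simplejson

    BASE_URL = 'http://my-fake-service.com'

    def get_person(person_id):
        """Fetch a person resource and return it as a plain Python dict."""
        url = '%s/people/%d.json' % (BASE_URL, person_id)
        return simplejson.loads(urllib.urlopen(url).read())

    person = get_person(1)
    print person['name']    # "John Doe"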


Not so much "can't" achieve it, but doesn't.

With xmlrpclib, serving functions over XML-RPC is a library import, a server class instantiation, and a one-line instruction to serve a function; on the client side it's an import, a connection, and a function call.

After that they behave much like any other functions (plus latency): parameters are encoded and return values decoded behind the scenes, and neither the server function nor the client needs any special 'written-for-XML-RPC' changes for it to work.
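
Roughly like this (a sketch using the standard library, with a made-up function; server and client run as separate processes):

    # Server: import, instantiate, register a plain function, serve.
    from SimpleXMLRPCServer import SimpleXMLRPCServer

    def blat_frobnitz(frobnitz_id):
        return 'blatted %d' % frobnitz_id

    server = SimpleXMLRPCServer(('localhost', 8000))
    server.register_function(blat_frobnitz)
    server.serve_forever()

    # Client (in another process): import, connect, call.
    #   import xmlrpclib
    #   proxy = xmlrpclib.ServerProxy('http://localhost:8000')
    #   print proxy.blat_frobnitz(2452352)   # encoding/decoding happens behind the scenes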

It's more a case of a well implemented Python module than an inherent win over REST - there could be a similar module for REST-RPC created, but AFAIK there isn't one at the moment.


REST-RPC is obviously possible, but would be additional stuff on top of REST, and... I guarantee you that REST advocates would flip out. By and large they think that having things that don't cleanly map down to RPC is a benefit; they would object that a function call lacks the richness to fully represent the results of a REST invocation: Did you get a cached value back? Can you get to the HTTP headers? Can you set cache header on the request itself, or other HTTP headers? And there's a lot of truth here. Basically, REST-RPC would be "an RPC mechanism built on top of REST", but would not be "REST that is also RPC"; REST-RPC must necessarily be at least one of "a significantly limited subset of REST" or "more complicated than a simple function call".

While I'm not willing to write off XML-RPC as absolutely useless, since I believe there are times when simplicity wins, I also tend to agree that for industrial-strength purposes RPC is a bad metaphor and you basically lose if you try to use RPC. It's just that I don't think REST is the right direction; I head more in the direction of message passing, as laid out in

http://www.erlang.org/pipermail/erlang-questions/2008-May/03...

And REST would agree with at least some of that; at least REST isn't hiding the fact that your function call is not local from you. But it falls afoul of many of the criticisms, too.


I like how XML-RPC support is built into Python. If a server supports XML-RPC, I can get an instant command-line interface that feels like using an ordinary library. The data structures that XML-RPC supports are mostly the same as JSON: arrays, hashes, integers, floats, and strings. There are also date/time values, which are automatically deserialized as Python datetime objects, and binary values which are transparently base64-encoded. Being able to use native types is a plus over dealing with XML trees. The lack of object types, in contrast with SOAP, is a feature in my opinion, since it keeps the interface portable across many languages.
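
A small sketch of that (the server URL, file, and method name are invented; use_datetime needs Python 2.5+):

    import datetime
    import xmlrpclib

    # use_datetime=True makes incoming dateTime.iso8601 values come back as
    # datetime objects instead of xmlrpclib.DateTime wrappers.
    proxy = xmlrpclib.ServerProxy('http://www.something.invalid/xmlrpc',
                                  use_datetime=True)

    when = datetime.datetime(2010, 1, 14, 12, 0)              # marshalled as dateTime.iso8601
    blob = xmlrpclib.Binary(open('photo.jpg', 'rb').read())   # base64-encoded on the wire
    proxy.namespace.store(when, blob)                         # hypothetical remote method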

XML-RPC uses XML and HTTP, but it hides both from the programmer. I think it is a mistake to criticize XML-RPC for not integrating deeply with HTTP, because HTTP is not really the point of XML-RPC. It's just an implementation detail. It happens that HTTP and XML libraries are everywhere, so XML-RPC lets you tunnel a simple RPC protocol through an infrastructure that is common across many programming languages.

Every REST interface I have tried to integrate with uses HTTP error codes differently, treats the GET/POST/PUT/DELETE commands in its own peculiar way, and has unique requirements for authentication. I am not at all sure that deep HTTP integration is appropriate for APIs. Regardless of the protocol (or "architectural style", if you insist), documentation of data structures, calling conventions, and error codes is essential, and just saying "we use HTTP" is insufficient to communicate these details.

Disclaimer: I wrote the xmlrpc-light library for OCaml. http://code.google.com/p/xmlrpc-light/


Every REST interface I have tried to integrate with uses HTTP error codes differently, treats the GET/POST/PUT/DELETE commands in its own peculiar way, and has unique requirements for authentication.

I can't defend systems that (for example) misuse HTTP status codes or ignore HTTP authentication, but this puts it at the same level as XML-RPC, which re-invents error codes, methods, and authentication every time.

documentation of data structures, calling conventions, and error codes is essential, and just saying "we use HTTP" is insufficient to communicate these details.

I absolutely agree. Documentation is essential for a web service, regardless of how you provide that service.


To say that some systems "misuse" HTTP status codes implies that there is a correct way to use them. What is the correct way to use HTTP status codes for an API? They weren't even designed for APIs. Where is this specified? Since REST is just a style, not a standard, correct use of status codes is undefined. As a result, everyone uses them differently. For example, what is the correct status code to send when a new resource has been created?

XML-RPC does not try to force an existing set of error codes on you. You start with a blank slate. This means you do have to come up with a plan for how you are going to use error codes, but the same is true of REST; the only difference is that you don't have to fit your error model to an existing set of codes designed not for APIs but for serving documents to a web browser.


For example, what is the correct status code to send when a new resource has been created?

From http://en.wikipedia.org/wiki/List_of_HTTP_status_codes

201 Created The request has been fulfilled and resulted in a new resource being created.

This isn't to say you shouldn't document how your service responds to requests, just that there are response codes you can use that are widely understood.

If you want you can respond to everything with 200 OK and require clients to parse responses to understand how your service handled a request, but again this is the same thing XML-RPC gives you.


So then, sending a 302 redirect in this case would be incorrect? That is how most web forms operate, and the web is RESTful by definition.


The web is not, in general, RESTful. State-changes on GET are common, for instance. Client state is regularly held in server-side sessions. PUT and DELETE are essentially unsupported.


I hope I'm not beating up a straw man here, but I have seen many claim that REST is a good solution that scales well because the web is built on the REST philosophy. The principles of REST are why the web is a success, and so if we use REST, we can attain the same benefits. What you seem to be proposing is that the web is not a good example of REST.

If this is the case, is there any good example of REST? I have yet to see a web service API that does not require one to read the docs to do URL construction, which violates the HATEOAS principle. The only examples I know of where HATEOAS is satisfied are HTML forms, as I mentioned, Atom feeds, and OpenSearchDescription. None of these are elaborate APIs (or APIs at all) where one would even need to consider something like XML-RPC.

I think HTML forms are a perfect example of REST, since they supply the client with all necessary information to build the next request, and URLs do not need to be constructed manually. The way 302s are used to redirect to the newly-created resource makes web browsers behave more RESTfully, since they prevent reloads from resubmitting POST requests. This is a very common practice today, and I would consider it one of the best examples of REST done right, and yet it uses a different HTTP response code than the proposed standard of 201 Created.

All I'm trying to do is show that there are at least two codes, 201 and 302, that properly RESTful services might return, to support my argument that response codes are not uniform across all REST services.


"I hope I'm not beating up a straw man here, but I have seen many claim that REST is a good solution that scales well because the web is built on the REST philosophy. The principles of REST are why the web is a success, and so if we use REST, we can attain the same benefits."

I don't think this is correct. REST piggybacks on the web's architecture, but I wouldn't say that the web is built on the REST philosophy, in part because REST postdates the web's initial growth. The web was a success almost in spite of itself, but REST wins by cherry-picking the good bits.

"If this is the case, is there any good example of REST?"

There is an interesting discussion of this and HATEOAS here: http://www.suryasuravarapu.com/2009/03/restful-api-and-hateo.... The examples given are Amazon S3 and the NetFlix API.

"The only examples I know of where HATEOAS is satisfied are HTML forms, as I mentioned, Atom feeds, and OpenSearchDescription."

A plain GET link can be an embodiment of HATEOAS. Remember, it's application state, not resource state that's key here.

"I think HTML forms are a perfect example of REST, since they supply the client with all necessary information to build the next request, and URLs do not need to be constructed manually."

Some are, some aren't. What HTTP method you're using and what the server does with the request are what make it RESTful (or not). From that perspective, plain links are just as RESTful as forms - provided they're treated correctly.

"The way 302s are used to redirect to the newly-created resource makes web browsers behave more RESTfully, since they prevent reloads from resubmitting POST requests"

Preventing resubmissions is more about interface design than REST, though. From REST's point of view it's not wrong to repeat a POST. I'd argue that strictly speaking either a 201 or a 302 might be correct, depending on what you're POSTing. If you've just POSTed to a collection URI, then a 201 response with a list of collection member resources would be just as correct as a 302 with the address of the newly created resource. The latter is more conventional, but as I say, that's a UI concern, not because one is more RESTful than the other. There are other ways to protect the server from resubmission problems, but they involve more work for the developer.

If you're PUTting, I'd argue that a 201 should always be correct, but given that browsers don't support PUTs directly anyway, that's not relevant here.

"All I'm trying to do is show that there are at least two codes, 201 and 302, that properly RESTful services might return, to support my argument that response codes are not uniform across all REST services."

I hope I've shown that it's not in any way inconsistent to support both.


You have. Thank you.


  but now we're discussing specific implementations, which is a separate issue.
The author states that specific implementations are a separate issue, but they really aren't. You can't ignore that a good implementation will mean that people will actually use your pet mechanism over someone else's. Most people don't want to reinvent the wheel, so they're not going to write the library to handle the wire protocol; they're going to use a library that does what they need.

REST is heavy, and XML-RPC unnecessarily so. Facebook's Thrift (http://incubator.apache.org/thrift/ ) is a lightweight alternative to both.

(That said, REST in Python is fairly easy - http://developer.yahoo.com/python/python-rest.html )


"Thrift is a software framework for scalable cross-language services development. It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml."

I have a very hard time believing that is a lightweight alternative to anything.

For REST all you need is an http library.


Ah. The on-wire protocol is what is lightweight. The framework certainly isn't. For simple REST via an HTTP library, how many bytes actually have to be encoded before you send a request, and decoded at the other end? (Keeping in mind that Thrift originated at Facebook may make the overhead from REST seem more relevant.)


First let me say I'm not arguing for XML-RPC. See my actual question at the bottom.

One nice thing about protocols like XML-RPC is they're well defined, unlike the somewhat abstract idea of "REST", which leaves a lot of room for interpretation.

You end up with lots of different REST API styles created by people of various levels of competence and understanding of what REST is. It makes it harder to create client libraries that handle the mundane aspects of communicating with these APIs.

Projects like httparty and rest-client do help somewhat, but they end up injecting their opinion of what the "correct" way to do REST is.

So my question is: has there been any effort to codify REST API best practices into a lightweight REST "protocol", such that it would be easy to write generic client libraries, yet still obey the principles of REST?



XML-RPC is just what it says, a protocol for calling remote functions, isn't it?

It gives you a unified means to specify functions, along with parameters and their types, and gives errors when the requests don't match the definition. With a REST implementation you roll yourself, you'd just have to re-implement this stuff, wouldn't you? Or what am I missing?

Tricks like appending ".jpg" to the url seem a little hack-ish to me. That feels like a parameter. Now what if there are two parameters, how do I know which comes first if someone else wrote it who doesn't use the same style as I do?

If I'm querying for an employee, is it clear that I should use employee/110.jpg and not employee.jpg/110 or employee/110/jpg or employee/110/query/jpg, or somewhere else?


With a REST implementation you roll yourself, you'd just have to re-implement this stuff, wouldn't you? Or what am I missing?

No, you wouldn't. As generalk said in his comment (http://news.ycombinator.com/item?id=1053384), it's the reverse: with REST over HTTP you make use of HTTP itself, which already provides the functions needed to invoke a method remotely, so you don't have to implement that important part on your own. I'll try to give an example.

Let's pretend you have a method called GetEmployeeByName(string name).

With REST you'll implement this method and allow your users to invoke it via http://myservice.com/employee/simpson. The invocation, parameter passing, error handling, etc. are done via HTTP, so theoretically no work is needed on this part, since you don't have to implement HTTP.

With XML-RPC things are quite different. You'll implement the method, but what you have to do now is -- as described in the article -- find a library (or implement your own) that will generate all the needed XML, etc.

You see, with REST over HTTP you just skip the XML-generating work (and more) and let HTTP do it for you. Why reinvent the wheel when HTTP is so well suited to this part of the task?

I hope this clarifies everything a little bit.


> you'll implement this method and allow your users to invoke it via http://myservice.com/employee/simpson

That's fine of course for the trivial example, but what if my third parameter is an array, and I want to see a sensible error if I pass a hash table? What are the rules for which parameters go in the URL, whether they are delimited by ".", "/", or something else, and when they come as JSON in the request body, and how many there are, and what format they should come in?

> find a library (or implement your own) that will generate all the needed XML files etc.

Is this a problem? Many many such libraries exist which make exposing RPC calls nearly as easy as writing function declarations. (After all, that's what it's for - remote functions.) When I'm writing normal functions, I don't pass a request type, and a parameter string with various arbitrary delimiters, along with a possible big blob of text, and use that to fit every possible function, dealing with it in my own individual ad hoc way, do I?


I have to admit that I haven't experimented a lot with REST, so maybe some REST-experts here on HN will help me out answering some of these questions or give feedback regarding the answers that will follow in the next paragraphs :)

Regarding question 1: I think it's all a question of how your method behaves -- or how it is implemented. If your method requires JSON, then you'll have to feed it JSON. If your method requires two parameters, then you'll have to give it two parameters (e.g. myservice.com/?param1=123&param2=456).

Regarding question 2: no, it's definitely not a problem, unless the library stops being maintained or god knows what else. Well, yes, it's more intuitive to write a plain simple function than to construct a very complicated request string as you mentioned.

As I already stated above, I don't have enough experience with REST at the moment to give you a more precise response, so I hope this one suffices. I also hope that somebody here on HN with more experience in the REST field will answer your question more precisely, so I can learn a thing or two, too :)


It gives you a unified means to specify functions, along with parameters and their types, and gives errors when the requests don't match the definition. With a REST implementation you roll yourself, you'd just have to re-implement this stuff, wouldn't you? Or what am I missing?

I see things in the reverse: HTTP and REST give you a way to make requests, specify parameters, indicate errors, authenticate, etc. XML-RPC ignores all of that and makes you re-implement all of those things.

Tricks like appending ".jpg" to the url seem a little hack-ish to me. That feels like a parameter. Now what if there are two parameters, how do I know which comes first if someone else wrote it who doesn't use the same style as I do?

Those are meant to resemble file extensions, but they can be thought of as parameters.

If I'm querying for an employee, is it clear that I should use employee/110.jpg and not employee.jpg/110 or employee/110/jpg or employee/110/query/jpg, or somewhere else?

It's not standardized, just like XML-RPC isn't. Either way you can read the documentation for your service and it'll tell you.

Alternatively, in REST everything has a URI. This has two interesting properties:

1. You can always access a resource by its unique URI: http://example/employees/110.jpg

2. As seen on the WWW: hypermedia links. If you request /employees.json, for instance, I can provide a structure like:

  {name: "John Employee", related: { image: "http://example/employees/110.jpg", performance_review: "http://example/employees/110/review" } }
Which means that with your response you can easily request those URIs and get the resources. Those URIs should never change (as mentioned), but HTTP provides for redirection if they do. XML-RPC does not provide for this at all.

(edited for formatting)


HTTP and REST give you several ways to make requests (PUT vs. POST? Fall back on POST for clients that don't support PUT? How?), several ways to specify parameters (URL path fragments, query strings, headers, or POST data?), and a set of standard errors that you do not control and which are not used consistently. I like how XML-RPC ignores all of that, because there are too many ways to do the same thing.


If that works for you, awesome.

HTTP and REST give you several ways to make requests (PUT vs. POST? Fall back on POST for clients that don't support PUT? How?)

PUT is used for update operations and POST generally for create. Your library will abstract away the "client doesn't support sending PUT" so you don't deal with it. Same way XML-RPC libs abstract away the ugly portions of XML-RPC.

several ways to specify parameters (URL path fragments, query strings, headers, or POST data?)

Yes, there's more than one way to do that. Generally a service's documentation tells you what to do, so I'm not seeing the issue, but I concede that someone might find fault with that.

and a set of standard errors that you do not control and which are not used consistently

Whereas XML-RPC gives nothing, and everything is re-invented every time. Again, if that's a better solution for you, awesome, but for me the "some people use HTTP status codes to mean different things" isn't a reason to throw the whole thing out.


PUT is used when you know the URL you are creating and you can send a whole resource. This, by convention, has been interpreted by many to mean updates, but not everyone agrees. The Basecamp API for instance uses "PUT /todo_items/#{id}/complete.xml" to mark a TODO item as complete. Is this creating a new "complete" resource, or updating a "todo_item" resource? Either way, when trying to make an API fit the REST style, one must fit everything into the standard verb / filesystem-like mold. With plain-old RPC, you name your methods whatever makes sense to you, using your time-earned experience in designing regular libraries, and you don't waste a moment on these nitpicky decisions.

The lack of consistent client support for PUT and DELETE further complicates this issue. As you say, you can abstract away the emulation of these verbs, but what library are you speaking of? A single-purpose solution to wrapping a particular REST interface? Or a one-size-fits-all REST wrapper that works with any server? I'm not sure the latter is possible, since there are once again multiple ways to do this - "?_method=PUT"? X-HTTP-Method-Override? Or something else entirely... it's not standardized, and REST isn't a spec, it's just a style, so everyone does it their own way. All of this so that we can write "PUT" at the beginning of the HTTP request line instead of further down or to the right? What is the actual benefit to all of this extra work?

The biggest issue with parameter-passing style is the lack of a standard way to pass complex data structures. It's as if every function took only a single argument, a string, and had to run a regex on it, splitting the results on delimiters, each in its own special way. If a regular software library were written this way, it would be appalling. At least PHP gave it a shot, with its arr[]=1&arr[]=2... syntax, but nobody's going to standardize on that, since query strings are unfashionable now, and nobody likes PHP anymore. I honestly think we should go back to query strings and stop embedding data in URLs because at least you get named parameters. And if you subscribe to the HATEOAS camp, they're more RESTful since we already have one standardized, well-documented way to provide clients all they need to build them (HTML forms).
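
To make the contrast concrete (a hypothetical nested argument, encoded both ways; Python's urlencode stands in for the flat query-string case):

    import urllib, simplejson

    args = {'ids': [1, 2, 3], 'include': 'reviews'}

    # Query-string style: flat key/value pairs only, so nesting needs ad hoc conventions.
    print urllib.urlencode(args, doseq=True)   # e.g. ids=1&ids=2&ids=3&include=reviews (order may vary)

    # Request-body style: the structure survives intact.
    print simplejson.dumps(args)               # e.g. {"ids": [1, 2, 3], "include": "reviews"}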

You're right, just because people don't use something consistently doesn't mean you should throw the whole thing out. I didn't mean to imply that. What I stand by, however, is that HTTP status codes were not designed for APIs, and that as a result they are guaranteed to be used inconsistently when used for the purpose of designing APIs. At least with all-custom error codes there's no expectation that everyone will interpret the HTTP standard correctly as applied to a problem it was never intended to solve.


> Tricks like appending ".jpg" to the url seem a little hack-ish to me. That feels like a parameter.

That feels like a file type. If there is more than one representation available then it's a parameter, indeed, but the standard place to put that is at the end of the filename.

Surely it's clear that: employees is the collection; employees/110 is a specific employee; employees/110.jpg is the jpeg representation of said employee.


Shouldn't you be using HTTP Accept headers to specify the format(s) you want? (rather than file extensions in the URL)


The spec says they can be used for that, and you could easily have a server that checks the HTTP Accept header if there is no extension. Most likely you need to hyperlink to many of the resources, so you're going to need the extension anyway (e.g. /employees/110.html has an img src="employees/110.jpg").

If it's for a programmatic client, not a browser, the HTTP Accept would probably be sufficient.
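
Something like this (hypothetical URL, using urllib2):

    import urllib2

    req = urllib2.Request('http://example/employees/110')
    req.add_header('Accept', 'image/jpeg')           # ask for the JPEG representation
    resp = urllib2.urlopen(req)
    print resp.info().getheader('Content-Type')      # e.g. image/jpeg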


Indeed. The parameter-passing mechanism is something XML-RPC decides for you, rather than leaving the task of string concatenation (or XML message-building) to you. One might even call this "opinionated".


It's shameless-plug-time, I'm afraid:

  http://github.com/regularfry/xmlrpc_annotations
A library I knocked together a while ago to generate C# interface files for an XMLRPC service by annotating the Ruby source. It could be extended to other languages, but I never bothered.

I used it to serve the interface file directly from http://example.com/service/Foo.cs. Works nicely for what it does.

To bring this comment lurching somewhat closer to the topic at hand, would this be a feasible approach for RESTful interfaces?


The first answer that comes to mind is tooling. At one extreme you have SOAP, where your IDE can make complete proxy objects from a WSDL file. At the other end is REST, where normally you're in luck if there's a nice client library to translate request/response into objects, but otherwise you're left with API documentation. (Is anyone really using WADL?) Somewhere in between is XML-RPC, where you're more than likely to find some kind of reflection method to call such as client.get_methods or something similar.


I'm going to take this opportunity to shamelessly plug an open source project I've started for easily turning requests/responses into .NET objects: http://restsharp.org


There is a de-facto standard for that: system.listMethods to list supported methods, system.methodSignature to get the parameter and return types for a method, and system.methodHelp to get its documentation. It's not an ideal solution, since method signatures are shallow and not every server implements these methods the same way (or at all), but they are enough to do basic code generation for statically-typed languages. Here's an example of that for OCaml: http://code.google.com/p/xmlrpc-light/source/browse/trunk/ex...
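
From Python those introspection calls look like this (against a server that actually implements them; the URL and method name are placeholders):

    import xmlrpclib

    server = xmlrpclib.ServerProxy('http://www.something.invalid/xmlrpc')
    print server.system.listMethods()                          # e.g. ['namespace.function', ...]
    print server.system.methodSignature('namespace.function')
    print server.system.methodHelp('namespace.function')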


I'm not much of a fan of the bloat that SOAP has acquired, but to me the point of something XML-based isn't so much the transport side but instead the API-publishing side. The point of a WSDL or similar descriptor is to define what's available: what methods you can call, what types of data they take, and what sorts of data they return. Generally, you can query the server for the WSDL in addition to actually executing the calls. To me that's the primary advantage, and the reason why a lot of clients prefer SOAP (or some other XML-RPC approach): it's because it provides a clear, published API that you can hand off to someone and have them program against, and there are tools for basically every language that can interpret that API. For statically-typed languages, those tools can also help make sure that your calls are at least syntactically valid, i.e. you're not making calls to methods that don't exist, or passing Strings in for integer parameters, or referencing fields off the result object that aren't there, etc. That might not seem like a big deal, but when you're calling a totally unfamiliar API, it can help you get up to speed much faster, and when you have clients that rely on that API, it really helps everyone's peace of mind when they can be 100% sure that API isn't changing on them.

I'm not aware of a similar, standardized, machine-readable equivalent for REST service documentation, though I'd certainly love to know about it if there is one.


True REST abhors the idea of separate "standardized, machine-readable documentation" like WSDL. The biggest point of REST is that Hypertext Is The Engine Of Application State -- but unfortunately most people focus on bullshit naming conventions that are largely False REST.

Every response should contain hyperlinks to other resources, where the structure/metadata around it indicates what's on the other end. It should still be self-describing even if the URL is completely opaque -- "nice URLs" do not make you RESTful.

You should never need an "endpoint" or any documentation at all, whether machine or human-readable. I should be able to explore your whole API just using HTTP. Ideally you'd use the exact same URLs for your API and browser-human interfaces, using the Accept: header to get different representations of the response.

Almost everybody fucks this up, and just uses 'meaningful' URLs that you're expected to build yourself, and lots of sassy human documentation to tell you what the constants are.


I would then argue that REST and XML-RPC fit in different niches and solve different problems, even if there's some substantial overlap between them, so people like the OP who argue that REST completely obviates the need for XML-RPC are missing the point.

Human exploration of a loosely-defined API is great for certain use cases, but it doesn't provide the hard API contract and level of tooling that led to the adoption of SOAP in the first place. So people like the OP who whine that people don't get it, and insinuate that everyone who uses XML-RPC instead of SOAP is some sort of unenlightened idiot, aren't making a compelling case if they just ignore the whole reason why SOAP is popular.


Keep fighting the good fight, XML-RPC needs to die.


But our clients use XML-RPC

Touché.

Client: But your design is in Latin, which I don't speak...

Engineer: Yes, but Twitter is written in Latin...

Client: You're fired.

Client needs as an afterthought are why engineers are considered by most to be professionals, while programmers are considered fodder for outsourcing to low-wage countries.


Just in case there's some confusion: that's exactly my point. Not the part about engineers vs. programmers, but the "client needs trump personal preference" bit. In our case, in order to be players in the industry, we have to support XML-RPC/SOAP.

(edit: typos)


XML-RPC is useless now that BERT-RPC exists, imo.


XML-RPC is still supported by more languages, and doesn't require third-party libraries for Python or Ruby users. But this looks interesting, thanks for mentioning it.



