
Introspected REST: An Alternative to REST and GraphQL - zaiste
https://introspected.rest/
======
politician
It looks like the meat starts at section 9.

\- Sections 1-8 are a summary of REST, HATEOAS, their problems, and reflections
on the problem that every endpoint is `application/json` and not something
more specific to the intended use case.

\- Sections 9-11 discuss a variant of REST that doesn't use Media Type
differentiation for capabilities, but rather composable nuggets of semantics
called MicroTypes. The MicroType schemata is accessible through an
introspection interface; one suggestion is to use (cacheable) OPTIONS. Clients
would transmit their desired MicroTypes to an endpoint using the Accept-Type
header.

If I'm reading this correctly, the author is suggesting the clients be allowed
to specify the middleware chain used by the server to compose the response
data (cf pagination, query, filter MicroTypes), and that these programmable
chains are attached to every traditional REST endpoint collection. In one
sense, this is an attempt to marry the resolver chain concept from GraphQL
into the world of RESTish JSON APIs.
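That "composable middleware chain" reading can be sketched as a toy pipeline. The MicroType names here (`pagination`, `field-filter`) and their interfaces are invented for illustration; the proposal itself does not define them this way:

```python
# Toy sketch: the server composes a response pipeline from the MicroTypes
# the client requested, then runs it over the collection.

def paginate(items, params):
    """Return one page of a collection."""
    page = int(params.get("page", 1))
    per_page = int(params.get("per_page", 10))
    start = (page - 1) * per_page
    return items[start:start + per_page]

def filter_fields(items, params):
    """Strip each item down to the fields the client asked for."""
    fields = params.get("fields")
    if not fields:
        return items
    wanted = set(fields.split(","))
    return [{k: v for k, v in item.items() if k in wanted} for item in items]

# Registry of the server's composable "nuggets of semantics".
MICROTYPES = {"pagination": paginate, "field-filter": filter_fields}

def compose(requested, items, params):
    """Run the requested MicroTypes over the collection, in order."""
    for name in requested:
        items = MICROTYPES[name](items, params)
    return items

cars = [{"id": 1, "make": "VW", "price": 100},
        {"id": 2, "make": "BMW", "price": 200},
        {"id": 3, "make": "Audi", "price": 300}]

result = compose(["pagination", "field-filter"], cars,
                 {"page": "1", "per_page": "2", "fields": "id,make"})
# result == [{"id": 1, "make": "VW"}, {"id": 2, "make": "BMW"}]
```

The performance concern below follows directly: each extra client-selectable stage is server work the client controls.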

I think you could pull this off only with a dedicated server library because I
don't see real world developers using this technique successfully on the
current crop of HTTP webserver APIs. There are also very real performance
issues that come up when clients are allowed to control resolution; we see
companies using GraphQL in production locking down the ability to do custom
queries.

~~~
hliyan
I think the best argument the author could have made is an example. If a
proposal like this cannot be illustrated in an example (like it can be done
for REST or GraphQL), its likelihood of adoption is going to be low...

~~~
speedplane
What I got out of it... REST as practiced now is bad. To find out how to make
it good, read this 40 page manual. I prefer to use the system that doesn’t
require long form specification reading.

------
hermanradtke
There is some good stuff in here, but it misses the biggest pain point
developers face when trying to make a robust hypermedia API: tooling. There is
a lack of tooling on the server side and almost no tooling on the client side.
Serving a RESTful API requires a lot of upfront
investment. Then convincing the client developers to take advantage of the
hypermedia affordances requires a lot of time and energy. If one does manage
to convince them, the client tooling requires even more upfront investment.
The only attempt at client tooling I have seen is the work done by Mike
Amundsen.

One thing GraphQL got right was focusing on the client tooling. If the client
developers are bought into the protocol/specification then the server
developers will naturally come along. The reverse has not been true in my
case.

This opinion is based on my experience building hypermedia APIs, consuming
hypermedia APIs and helping with the HAL hypermedia spec.

~~~
dustingetz
This tooling-first approach is basically what we're going for with
[http://www.hyperfiddle.net/](http://www.hyperfiddle.net/) – the requirements
of the killer app are what drives the hypermedia API, the mechanics of which
are extremely innovative and weird. One key difference is Hyperfiddle's I/O
layer is decoupled from transport, it is not limited to http or client/server,
which opens a whole spectrum of I/O configurations with different performance
characteristics, including "ship the api definition over there so it can run
near that secure database" like html/javascript layer apps are shipped over
the wire. We think Hyperfiddle can emit Siren-compliant representations (or
any other general purpose hypermedia mimetype), though Siren can not express
the entire continuum of I/O and data ownership that Hyperfiddle's protocol can
(and thus Hyperfiddle probably cannot be built directly on Siren – _the tools
must come first_ ). We solve the caching problems with an immutable database
(Datomic) which permits idealized caching of everything at every layer. If the
constraint of an immutable database sounds like a dealbreaker today, it
probably is; but it unlocks a whole new frontier of capabilities that apps ten
and twenty years from now will require. Why are we designing new protocols for
the requirements of yesterday?

~~~
hermanradtke
This is definitely interesting and I will spend some more time checking this
out. This tooling looks more advanced than pretty much anything else out
there, but from what I can tell it is a server-side driven approach. I have
been trying to imagine what tooling that is client-side first looks like. It
should not matter _how_ a server implements something, such as Siren, only
that it properly follows the semantics of the Siren protocol.

~~~
dustingetz
It can be server driven, but it doesn't have to be. It depends where the data
is and what the permissible access patterns are and which process is
responsible for enforcing them. These days the data that matters is in server-
side databases with tightly controlled access patterns so it is pretty weird
for the client to be in charge. Hyperfiddle's data protocols are sufficiently
abstract to run anywhere in the continuum of data ownership [1], but so far
we've only bothered to implement the parts of it that matter to today-era
businesses. Are we thinking about this the same way or have I missed the mark?
[1] [http://www.dustingetz.com/:urbit-continuum-of-data-ownership/](http://www.dustingetz.com/:urbit-continuum-of-data-ownership/)

~~~
pdimitar
Wow, this is really interesting. Wish I worked on it!

What tech and languages do you guys use? And are you hiring?

------
bjt
Also submitted a couple months ago. Only got a few comments back then:
[https://news.ycombinator.com/item?id=15211604](https://news.ycombinator.com/item?id=15211604)

The first sentence of TFA captures one of the most annoying things about REST
discussions: "In this manifesto, we will give a specific definition of what
REST is, according to Roy, and see the majority of APIs and API specs
(JSONAPI, HAL etc) fail to follow this model."

At this point I've read or heard dozens of claims like this, that almost
everyone is doing REST wrong. It's well past time to stop blaming all the REST
implementers in the world for being too dumb to understand Fielding's
brilliant vision. If most software developers can't get REST right, then
either proponents have consistently done a crappy job explaining the idea, or
it's not as great an idea as they think.

Skimming the table of contents, it looks like the authors have thought deeply
about the problems with REST and come up with some well-reasoned solutions. So
they're answering the "REST vs Introspected REST" question. But much more
relevant to me is the "Introspected REST vs GraphQL" question. What would make
someone choose this over GraphQL? Introspected REST has a lot of catching up
to do to match GraphQL's tooling and market share.

~~~
vasilakisfil
Author here: The problem with GraphQL is that it has to re-invent everything
on top of HTTP. Introspected REST reuses HTTP properties and architecture by
default, making it more robust and compatible with existing clients. Also
related:
[https://news.ycombinator.com/item?id=18425581](https://news.ycombinator.com/item?id=18425581)

------
jtms
The developers that would hypothetically adopt and implement a new paradigm
like this would probably love to just see some straightforward examples of
client and server usage.

------
tannhaeuser
Can we just have back SOAP/WSDL and return to rational design and
interoperability, or at least limit "REST" to a web facade? I think after
10-15 years of mucking around with "REST" (or what people think it is, as
rightfully pointed out in TFA) it's very clear that there's not going to be a
common understanding, let alone standard for it. As a freelancer having worked
on maintaining many "REST" trainwrecks, I can tell you that naive REST
spaghetti is absolutely much worse than any SOA design ever was. The technical
debt and high maintenance might not be apparent while you happily code away
your new "microservice"; but I can assure you you've just traded a tiny bit of
upfront design for a long-term puzzle you're leaving behind.

Did you know WSDL has supported "REST"-like encoding of parameters in URLs and
operations as HTTP verbs since 2001?

~~~
unscaled
Even assuming we'd want XML as the message/schema format back (with all the
security, performance and readability issues it entails), it's not as if
SOAP/WSDL/WS-* was any better than REST at standardization back in the day.

For instance, there were multiple different ways you could format your message
(RPC/encoded, RPC/literal, Document/encoded, Document/literal,
Document/literal wrapped) and different implementations supported different
formats. There were all kinds of extra features that were never supported
across the board, like multi-part messages, etc.

Before WS-I there was practically zero guarantee that two SOAP implementations
would ever be able to interoperate (and please remember, while REST has no
standard in practice, it's dead simple to implement REST by hand - the same
couldn't be said about SOAP!). WS-I only came out in 2004 or so, and by then it
was already too late. I'd say the SOAP ecosystem is _the_ definitive case
study for trainwrecks.

If you really want a standard method for RPC then you're much better off with
a modern implementation like gRPC or Thrift or Cap'n Proto. Please don't go
back to the nightmare called SOAP.

~~~
dfox
I feel that the point is not about the transfer representations but about the
interaction model. REST makes sense as long as you can sanely map your
interactions onto simple modification of something that can be meaningfully
described as a "resource". For a typical application that implements
non-trivial business processes, this means that you either expose low-level
implementation details (i.e. how you internally represent the progress of some
process) in your API, or you implement a "REST" API that is sufficiently far
from what REST is supposed to mean that it stops making sense to use that
moniker.

On the other hand, SOAP is a good match for such applications because it is
simply an RPC mechanism, albeit with unnecessarily complex marshalling and
transport layers underneath.

------
dpim
This looks a lot like OData - a REST-ful API standard with schema
introspection, patterns for defining and traversing resource relationships,
well-defined guidance around mechanics, tooling support. In particular, the
"Microtypes" concept resembles how entities work in OData - rich query support
for collections (eg. sorts, filters, order by), "expansions" on related
resources, even inheritance semantics (ie. being able to request a derived
entity as its parent type).

~~~
felixfbecker
The fact that OData calls itself not only RESTful, but literally "the best way
to REST", while using requests like this:

    
    
        GET serviceRoot/People('russellwhyte')/Microsoft.OData.SampleService.Models.TripPin.GetFavoriteAirline()
    

is an absolute _insult_ to REST and the target developer audience.

[https://www.odata.org/getting-started/basic-tutorial/#boundedFunction](https://www.odata.org/getting-started/basic-tutorial/#boundedFunction)

~~~
dpim
I think this is a bit cherrypicked. The example you're using is a fully
qualified bound function. OData support for actions and functions explicitly
exist to provide affordances for how to do RPC within OData. You can easily
model this API in OData without requiring a function (eg. having a navigation
property reference called "favoriteAirline"). Moreover, you can typically
invoke functions without a fully qualified prefix (save cases where there is
ambiguity).

For the most part, OData does a good job at letting folks opt into complexity,
allowing integrators to make full use of APIs without needing to know anything
about $metadata, inheritance mechanics, functions, etc.

------
vbezhenar
I never understood why people love REST. Just use some kind of JSON-RPC, or
something like that. In my experience most of so-called "REST" interfaces are
just poorly written RPC. Some people even think that REST means HTTP + JSON.
It's extremely rare to encounter a true REST interface with e.g. HATEOAS,
proper caching, etc. And if many developers can't utilize a technology, the
technology is probably not good enough to be commonly used. Web Services with
their WSDL were the best thing: they were too complex and they used XML, which
is apparently out of fashion today, but the idea was solid. We just need
something simpler but with good enough tooling.

~~~
rtpg
REST aligns _very_ well with CRUD. Most software is, basically, CRUD.

So by saying "we're going to go with REST", you can determine about 90% of
your API design more or less instantly, and it rarely gets in the way.

You still have to get your domain modeling right, and sometimes you need to
make some extra resources. But you have a design that, basically, works.
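That "about 90% of your API design" is concrete: once you commit to REST-over-CRUD, the route table for any collection resource falls out mechanically. A minimal sketch (the resource name and action labels are illustrative):

```python
# Illustrative only: the conventional CRUD-to-REST route mapping that a
# "we're going with REST" decision hands you up front.

def crud_routes(resource):
    """Derive the standard routes for a collection resource."""
    collection = f"/{resource}"
    item = f"/{resource}/{{id}}"
    return [
        ("GET",    collection, "list"),    # Read the collection
        ("POST",   collection, "create"),  # Create a new item
        ("GET",    item,       "read"),    # Read a single item
        ("PUT",    item,       "update"),  # Replace (PATCH for partial update)
        ("DELETE", item,       "delete"),  # Delete
    ]

routes = crud_routes("cars")
# e.g. ("POST", "/cars", "create") and ("GET", "/cars/{id}", "read")
```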

~~~
tannhaeuser
I disagree that most apps are CRUD. On the contrary, most apps that start out
as naive CRUD accumulate complex implicit constraints, e.g. about the states
in which a resource may be modified in a particular way.

Even for CRUD apps with simple master/detail data relationships it doesn't
make sense to tie your domain design to network requests.

------
vinceguidry
Generally speaking, I think a lot of developers get into trouble by not really
putting the effort into grasping the _ideology_ of architectural frameworks
like REST, 12factor, and React.

For example, our API needed a way to serve a different view of an existing
model. How do I get the API to know which view to serve? When I asked around,
they said the best way was to do /cars/1/prices, which I didn't like because I
feel it breaks REST. There's no price model attached to the car; prices are
fields of cars, at least for now.

I had to think for a few seconds before coming up with just using a query
parameter to set which view to serve, /cars/1?view=pricelist, preserving REST.
But most coders just take the first thing that comes to mind, and then wonder
why their applications are so messy after a few years.
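The query-parameter approach described above can be sketched like this (a hypothetical Python illustration; the view names and fields are invented):

```python
# One resource, multiple representations selected via ?view=. The "pricelist"
# view reshapes the same car record instead of minting a fake sub-resource.

CAR = {"id": 1, "make": "VW", "model": "Golf",
       "base_price": 20000, "options_price": 3000}

VIEWS = {
    "default": lambda car: {k: car[k] for k in ("id", "make", "model")},
    "pricelist": lambda car: {"id": car["id"],
                              "prices": {"base": car["base_price"],
                                         "options": car["options_price"]}},
}

def show_car(car, view="default"):
    """Handle GET /cars/{id}?view=... by picking a representation."""
    return VIEWS.get(view, VIEWS["default"])(car)

# GET /cars/1                -> show_car(CAR)
# GET /cars/1?view=pricelist -> show_car(CAR, "pricelist")
```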

Coders seem to not want to bother learning how existing solutions are
_supposed_ to work before jumping to a half-baked newer solution just because
it seems more intuitive. If you understand REST, then you can see how GraphQL
can improve certain aspects of API interaction.

But it's not a panacea any more than React is. If you understood how HTML,
CSS, and Javascript are supposed to work, then you can see how React improves
on it. But if you can't then your React applications will be just as
horrendous as your jQuery ones were.

------
jmakeig
REST allows for clients and servers to evolve independently. If you can
constrain that you can probably design something significantly simpler. That’s
not a shortcoming of REST, though. The channel between your mobile client and
your backend would probably be better served with something more like RPC than
academic REST.

~~~
vasilakisfil
That's our proposal/challenge here: come up with a model that tries to marry
the best of both worlds (RPC and REST) while still adhering to HTTP semantics
and allowing clients/servers to evolve independently.

------
pkz
Kudos to the author for going the extra length of trying to maintain semantic
interoperability and reuse by showing how to be backwards compatible with e.g.
JSON-LD. This makes it possible to continue to build upon all the vocabularies
already created.

~~~
vasilakisfil
Thanks! That was the real challenge!

------
vasilakisfil
Hi, author here. AMA. But I would like to make a couple of points.

For starters, a lot of people are looking for a TL;DR. There is no such thing.
Same with tooling.

This is an open publication and should be considered as is. The intention was
to come up with a better architectural design than REST, reusing existing
Internet architecture. And that was the real challenge, because REST and HTTP
were designed largely by the same person. Bending existing Internet
architecture to fit another architectural style for networked services is
extremely difficult, but apparently not impossible.

GraphQL, for instance, uses HTTP just as a transport layer. That's a big
assumption to make; anyone can build very flexible stuff if they are willing
to re-design, on top of HTTP, everything HTTP already gives.

Another note: the reason it takes so long to get to the actual model (section
9) is to make sure the reader understands what REST is and where it fails. I
haven't really seen any other document/publication explain REST so
extensively. So for those complaining that no one gives a definition of REST:
go through sections 1-6 and you should have it.

Last but not least, Introspected REST is compatible with existing REST
architecture; it just makes it more robust and flexible. For a tiny hello
world example it doesn't really make any difference, other than exposing some
metadata through the OPTIONS endpoint, like the (JSON) schema and the linking.
But for complex APIs it should give huge advantages compared to REST.
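To make the hello-world case concrete, here is a speculative sketch of what such an OPTIONS response might carry for a `/users` resource; the MicroType names and payload shape are guesses for illustration, not defined by the publication:

```python
import json

# Hypothetical metadata a server could return for `OPTIONS /users`:
# a JSON Schema for the representation plus linking information.

def options_users():
    return {
        "microtypes": {
            "json-schema": {
                "type": "object",
                "properties": {"id": {"type": "integer"},
                               "name": {"type": "string"}},
                "required": ["id"],
            },
            "linking": {
                "self": "/users/{id}",
                "collection": "/users",
            },
        }
    }

# The response would be cacheable, so clients pay the introspection
# cost once per resource rather than once per request.
body = json.dumps(options_users(), indent=2)
```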

And again: this is an open publication. From here to an actual implementation
there are many steps that need to be taken (for starters, defining the
necessary MicroTypes).

------
pwpwp
TLDR?

~~~
jtms
Yeah seriously... +1 to this. Even just a handful of examples of client and
server usage would be far more useful for determining if it’s worth digging
deeper.

~~~
porphyrogene
This is a formal proposition. If it catches on I expect others to explain it
in various levels of complexity but this document is intended to be dense and
deeply descriptive.

~~~
GordonS
Sure, but couldn't it begin with an executive summary?

If you take RFCs as an example, many start with a simplified description or
problem statement before then proceeding to get into the details.

------
friedman23
ok I think I'm missing something here

------
tinyvm
I wasn't able to read this entirely; it's a very dense spec.

That said, even if the arguments in this spec are solid, the entire industry
has sold itself to GraphQL.

GraphQL has won and it won't change.

~~~
arnvald
2007: SOAP has won and it won't change

2012: REST has won and it won't change

2018: GraphQL has won and it won't change

2023: ...

~~~
james_s_tayler
Hahahahahaha.

Perfect.

Maybe 2023: gRPC has won and it won't change

