
How should we build the APIs of tomorrow? - mooreds
https://increment.com/apis/how-we-should-build-apis-tomorrow/
======
AriaMinaei
A few ideas to put out there:

1\. Emphasize synchronization over imperative API calls. _Imperative APIs
encourage data silos._ They are the underlying technical part of the problem
that Zapier and IFTTT try to solve. See [0] and [1] for some ideas.

2\. Allow users to submit "agents" rather than "requests." An agent is a small
program given extremely limited resources (limited process/memory/runtime
budget) that you can safely run on your server. It is a possible answer to
many of the standards/protocols that wouldn't exist if frontends were running
on the same machines as backends. [2]

3\. Emphasize composition over integration. Functions compose (nest, recurse).
Event emitters don't. As long as APIs are built to be integrated rather than
composed, making them work together is a full-time or full-company job (eg.
Zapier).

4\. Make things immutable. Immutability allows APIs to "play" with one another
without fear of setting off the nukes (ie. side effects). It's possible that
this approach would make it so that integrating two APIs becomes a job for
ML/AI rather than humans.
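
To make the "agents" idea in point 2 concrete, here is a rough Python sketch. Everything in it is hypothetical: `get_user`/`get_orders` are made-up endpoints, and the `eval`-based sandbox only shows the shape of the idea (real isolation would need a proper sandbox plus the resource limits mentioned above):

```python
# A client submits a tiny "agent" (an expression over a whitelisted API
# surface) instead of a plain request. Names here are illustrative.
SAFE_API = {
    "get_user": lambda uid: {"id": uid, "name": "alice"},
    "get_orders": lambda uid: [{"order": 1, "user": uid}],
}

def run_agent(agent_source: str):
    # Hide builtins; the agent sees only the whitelisted API surface.
    # NOTE: eval is NOT a real sandbox; this only sketches the protocol.
    return eval(agent_source, {"__builtins__": {}}, dict(SAFE_API))

# One round-trip does the work of two imperative calls:
result = run_agent("get_orders(get_user(42)['id'])")
```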

[0] [https://github.com/braid-work/braid-spec](https://github.com/braid-work/braid-spec)

[1] [https://writings.quilt.org/2014/05/12/distributed-systems-an...](https://writings.quilt.org/2014/05/12/distributed-systems-and-the-end-of-the-api/)

[2]
[https://news.ycombinator.com/item?id=23900749](https://news.ycombinator.com/item?id=23900749)
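
The composition point (3) can be illustrated with plain function composition — a generic sketch, not tied to any particular API:

```python
from functools import reduce

def compose(*fns):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Two hypothetical API operations, each an ordinary function:
def fetch_user(uid):
    return {"id": uid, "name": "alice"}

def to_summary(user):
    return f"{user['name']}#{user['id']}"

# Combining the two "APIs" is one expression, no glue service required:
summarize = compose(to_summary, fetch_user)
```

An event emitter, by contrast, has no return value to nest, which is why wiring emitters together tends to require dedicated integration code.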

~~~
MichaelApproved
As an end user of APIs, agents sound like a good idea.

As a creator of APIs, agents seem scary from an optimization/caching point of
view.

I’d rather serve the same request 10,000 times (which could be cached) than
have 1,000 different agents making requests (which would be hard to cache).

~~~
nine_k
Agents should use a limited, non-Turing-complete model of evaluation. They
could combine API calls locally and run simple FSMs which are easy to formally
check for e.g. absence of loops. This could save round-trips without needing
to sacrifice general orthogonal APIs.

Imagine that you could give an app server a _formula_ to combine several API
calls, much like you give an SQL server a formula to join and filter tables.

Also, you can easily cache agents by calculating a hash of their source, and
call them repeatedly without resending their bodies, much like "prepared
statements" work in SQL databases.

~~~
AriaMinaei
I agree. Though I'm not sure about the non-Turing-completeness. How about each
agent having a budget? Agents that blow the budget get dropped.

Like this:
[https://gist.github.com/AriaMinaei/69e1a9166e7ffbd61e7f6709d...](https://gist.github.com/AriaMinaei/69e1a9166e7ffbd61e7f6709d7a819b8)
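
Independent of the linked gist, the budget idea might be sketched like this (all names hypothetical; a real server would also meter memory and wall-clock time):

```python
class BudgetExceeded(Exception):
    pass

def metered(fn, meter):
    """Wrap an API call so each invocation charges the agent's budget."""
    def wrapper(*args, **kwargs):
        meter["budget"] -= 1
        if meter["budget"] < 0:
            raise BudgetExceeded
        return fn(*args, **kwargs)
    return wrapper

def run_with_budget(agent, api, budget):
    meter = {"budget": budget}
    metered_api = {name: metered(fn, meter) for name, fn in api.items()}
    try:
        return agent(metered_api)
    except BudgetExceeded:
        return None  # agents that blow the budget are simply dropped

api = {"inc": lambda x: x + 1}
well_behaved = lambda api: api["inc"](api["inc"](0))          # 2 calls
runaway = lambda api: [api["inc"](i) for i in range(10_000)]  # blows budget
```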

------
jayd16
Enjoyable read, but not much here. Strange that, from what I could tell, the
article never references edge computing or caching.

~~~
alonsonic
Agree, the author doesn't paint a picture of what might be coming. I would
love to hear from the HN crowd on what's coming next in API design. I see a
lot of buzz around gRPC; will it be the next standard?

~~~
crispyporkbites
It boils down to:

\- optimise the amount of data you send (see GraphQL / this person's agent idea)

\- optimise where you send it to/from

The latter has a hard limit of c, which we’ll always try to move towards.
Distributed computation helps, but trades off speed/consistency. The question
then becomes whether you can have an inconsistent model for those few seconds.

~~~
adamkl
I always thought this was a pretty interesting take on the future of web
applications/APIs, and aligns pretty closely with what you describe:

[https://tonsky.me/blog/the-web-after-tomorrow/](https://tonsky.me/blog/the-web-after-tomorrow/)

------
xcambar
At the risk of being considered non-constructive, I wish the author had made a
leap of faith and shared their educated guess about where API engineering is
heading.

I appreciate the effort of sharing a perspective, but I would have warmly
welcomed a bit of foresight.

That being said, nice prose and interesting article nonetheless.

------
rumanator
It's a pretty good article, but it's odd that auth wasn't mentioned. If chatty
clients are a concern, then I would expect token-based auth to be bundled with
the problem.

------
cel1ne
Use REST for reading, but RPC-style calls for writing. Benefits:

1\. Every write-call is a transaction

2\. You can have complex filtering and sorting on the read/access side of your
API without having to worry about the update side of things.

3\. You can evolve the read-API, even change semantics and return different
objects and again, not worry about transactions.

4\. You are not limited to the HTTP standard.

Here are also some good tips: [https://www.vinaysahni.com/best-practices-for-a-pragmatic-re...](https://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api)
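
The read/write split might be sketched like this (a toy in-memory version; the resource and command names are invented for illustration):

```python
USERS = [{"id": 1, "active": True}, {"id": 2, "active": False}]

# Read side: REST-ish, free to grow filters and sorting
# without ever touching the write path.
def read(resource, **filters):
    rows = {"users": USERS}[resource]
    return [r for r in rows if all(r.get(k) == v for k, v in filters.items())]

# Write side: named RPC commands, each mapping to one atomic transaction.
def rpc(command, **args):
    if command == "deactivate_user":
        for r in USERS:
            if r["id"] == args["id"]:
                r["active"] = False
                return {"ok": True}
    return {"ok": False, "error": "unknown command or id"}
```

Because the read path never mutates anything, it can change shape freely; each write command stays a single, auditable transaction.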

------
anderspitman
> To work around these complexities, Google built Spanner, a database that
> provides strong consistency with wide replication across global scales.
> Achieved using a custom API called TrueTime, Spanner leverages custom
> hardware that uses GPS, atomic clocks, and fiber-optic networking to provide
> both consistency and high availability for data replication.

I find it interesting that most of this complexity simply falls away if users
host their own data. In my estimation, most people's computing needs would
best be satisfied with a smartphone + a Raspberry Pi in their house hosting
their data, protected by a simple auth scheme, and accessed using simple
protocols built on HTTP. That would be more than enough to access all their
photos and videos, documents, and social feed for their few hundred friends to
consume. Things like email would probably still best be handled by the one
cousin in the family who works in IT, to manage spam etc.

If only the technical side were the actual problem.

~~~
forbiddenvoid
Users don't want to host their own data. I think this has been borne out time
and time again: hosted options win out over self-hosting.

~~~
nine_k
This depends on the user; tech-savvy users may prefer a self-hosted version,
especially if it installs in a few clicks. But they are outnumbered by
IT-naive users whose only realistic option is hosting by a vendor.

~~~
m11a
> tech-savvy users may prefer a self-hosted version

Not necessarily. I'd hate to self-host. It's a maintenance burden: time wasted
getting things to work in all their complexity, and once that's done you need
to make sure everything keeps working, that bills keep getting paid, and
you're responsible for drive failures, backups, a missed bill, etc. And that's
despite my software and sysadmin experience.

I don't think it has anything to do with "tech-savvy". Hosted is just the
better option in almost all cases, especially at individual or small-medium
scale.

I actually think the converse of the typical self-hosting claim is true: only
a small number of tech-savvy users like to self-host. They just happen to be a
vocal minority.

------
rhn_mk1
Seems it's about client-server APIs, not API design in general.

~~~
ChrisMarshallNY
Yup. It's an important topic, but I've been designing APIs for decades; just
not the kind he talks about. I do things like device control APIs, these days.

The ones that I do have some issues that he doesn't cover, and a whole lot of
issues that he discusses are of no concern to me.

------
lazyexecution
In the APIs of tomorrow, clients should ask servers to prepare operations
without executing them. Servers should return ids which clients may embed
within follow-up operations, effectively allowing a client to construct a
complex request out of simple operations and combinators.

Eventually the client will ask the server to evaluate the request and produce
a result.
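
A minimal sketch of that prepare/compose/evaluate protocol (all names invented; a real protocol would tag ids so they can't collide with literal arguments):

```python
import itertools

OPS = {}                 # id -> (fn, args); nothing is executed yet
_ids = itertools.count(1)

def prepare(fn, *args):
    """Register an operation without running it; return its id."""
    op_id = next(_ids)
    OPS[op_id] = (fn, args)
    return op_id

def evaluate(op_id):
    """Force the deferred graph rooted at op_id."""
    fn, args = OPS[op_id]
    # Naively treat any arg that is a known id as a sub-operation.
    resolved = [evaluate(a) if a in OPS else a for a in args]
    return fn(*resolved)

# The client composes simple prepared operations into a complex request:
a = prepare(lambda: 2)
b = prepare(lambda: 3)
total = prepare(lambda x, y: x + y, a, b)
```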

This lazy style of execution is what effect systems like
[https://zio.dev](https://zio.dev) support, where a program constructs an
effect which is executed later by the runtime.

I've attempted building this sort of effect-based API over protocols like REST
a few times without much success. The biggest problem isn't technical. The
hardest part is convincing coworkers and API consumers of the value of this
approach.

