
The Future of APIs: APIs aren't the endgame; they won't stay forever - zdne
https://blog.goodapi.co/future-of-apis-c84a76bc9c85
======
ekidd
I got about halfway down, and it suddenly started sounding like the Semantic
Web reincarnated as an API service. This idea crops up about once every 10
years (in some form or another), and it runs into fairly predictable problems.
There are two good essays I recommend people read before getting too excited
about how well machines can interoperate without humans in the loop.

Shirky's "The Semantic Web, Syllogism, and Worldview"
http://www.shirky.com/writings/herecomeseverybody/semantic_syllogism.html

Doctorow's "Metacrap: Putting the torch to seven straw-men of the meta-utopia"
http://www.well.com/~doctorow/metacrap.htm

Doctorow talks about problems with metadata, but these problems might apply
equally to APIs and the API vocabulary discussed in the article. Specifically:

 _2.1 People lie_
 _2.2 People are lazy_
 _2.3 People are stupid_
 _2.4 Mission: Impossible -- know thyself_
 _2.5 Schemas aren't neutral_
 _2.6 Metrics influence results_
 _2.7 There's more than one way to describe something_

The fundamental problems are that (1) getting people to agree on things is a
surprisingly difficult and political problem that can never be solved once and
for all, and (2) people have incentives to lie. If you invent a generalized
way to look up _any_ weather forecasting API, somebody is going to realize that
they can make money gaming the system somehow. PayPal is really in the
business of fraud detection, and Google is in the business of fighting against
blackhat SEO (and click fraud).

So take your automated API discovery utopia, and explain to me what happens
when blackhats try to game the system and pollute your vocabulary for profit.
Tell me what will happen when 6 vendors implement an API vocabulary, but none
of them quite agree on the corner cases. This is the hard part.

------
falcolas
Here's the part I find most humorous. Machine to Machine communication has
already been attempted, and it always fell back to requiring human
intervention. Web Services Description Language (WSDL) was an attempt to do
exactly this, and it failed.

WSDL didn't fail because it frequently tried to describe SOAP connectivity
(and everyone knows that SOAP is _obviously_ bad for _all_ things), it failed
because it was still people writing the APIs and the descriptions of those
APIs. And since people aren't perfect, humans had to intervene to find and fix
the bugs to properly communicate with a WSDL defined API.

Until AI gets good enough, or we adopt a specific definition to meet all use
cases (should be fun to watch), such attempts are going to keep failing.
Because it's humans hiding in the box of the Turk, and will be for the
foreseeable future, and computers are still pretty terrible at communicating
with humans.

~~~
tomc1985
I thought the whole point of WSDL was to look at statically-typed code and
dynamically generate the XML schema (and the XML) without human intervention
-- that you could look at the code, know the types it uses and how it uses
them, and then generate the WSDL from that information?
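And on the consuming side, the WSDL alone is enough for a library to build a
typed client on the fly. A minimal sketch, assuming Python's zeep library (the
endpoint and operation names here are hypothetical):

    from zeep import Client  # third-party SOAP client: pip install zeep

    # zeep introspects the WSDL and builds typed operations at runtime,
    # so no hand-written request/response parsing is needed.
    client = Client("http://example.com/weather?wsdl")  # hypothetical service
    forecast = client.service.GetForecast(City="Prague")
    print(forecast)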

~~~
kogir
.NET and Java had good support for statically generating them (correctly
even!) from code, but I never had the pleasure of integrating with such a
service. Even in that case though, there were developers who used strings for
everything instead of the correct types, didn't handle null correctly, etc.

Overwhelmingly in practice people wrote WSDLs and XSDs by hand, incorrectly,
and treated them as secrets instead of publishing them along with the API
endpoints. There was other "enterprise" BS during the XML craze too. In one
case I even had to work around bugs in an IBM XML firewall, or its
configuration. It was a nightmare[1].

[1] https://en.wikipedia.org/wiki/XML_appliance

~~~
mattmanser
They were a bit of a nightmare. Fun highlights:

    - IsBooleanPropertyIncludedBoolean
    - Massive, incredibly slow-to-build libraries
    - Check everything in case it was null
    - Never really sure what was wrong when it went wrong

There was massive cognitive overhead, and it was overkill when you often just
wanted to query one little thing.

Also they'd sometimes break and you'd have to hand-edit the WSDLs to get them
to work again (Salesforce, at least, used to break theirs every now and again;
the WSDL would be incompatible with the .NET tool because certain characters
weren't escaped properly).

~~~
tomc1985
Well, when correctly implemented, working with WSDLs can be almost pleasant

------
niftich
There's some very good info here towards the end, but the first half of the
blog post made me wonder if they were ever going to get to it.

Perhaps this is just a function of it being a marketing post, designed to
simultaneously appeal to different audiences while explaining the problem to
decision makers who are unfamiliar with the problem being solved. I
sympathize, but as a designer acutely familiar with the problems around API
discovery, I found the first half an extremely cringey read.

Anyway, you quoted all the right sources (save for Tim Berners-Lee's Semantic
Web and Giant Global Graph), and I wish you much luck, but I think you're
aware that this was tried before [1][2], where much less human interaction and
intervention was required, and it nonetheless faltered. "Complexity" was a
scapegoat at the time, and I think that's an unsatisfactory, almost too
convenient an answer. So how do you avoid the same fate?

[1] https://en.wikipedia.org/wiki/Web_Services_Discovery#Universal_Description_Discovery_and_Integration
[2] https://en.wikipedia.org/wiki/Web_Services_Description_Language

~~~
zdne
Thanks for the review! I didn't mean the article as a marketing post, but I
wanted to share my (long) thought process.

Nothing in the article is conceptually new, but maybe™ the time is now right.
Frankly, the part I'm concerned about isn't the semantics sharing at runtime;
it's the de-coupled, declarative approach to writing the clients.

With hypermedia, we've failed at the gates of client development. Devs tend to
tightly couple their code to APIs, ignoring the consequences. If there's no
incentive on the client's side, nothing from the article will matter.
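
For illustration, here's roughly the difference as I see it, as a minimal
Python sketch (the URLs and link-relation names are made up):

    import requests

    # Tightly coupled: the URL structure is baked into the client, so any
    # server-side change breaks it.
    forecast = requests.get(
        "https://api.example.com/v2/cities/prague/forecast").json()

    # Hypermedia style: the client knows only the entry point and follows
    # the link relations the server advertises, so URLs can change freely.
    entry = requests.get("https://api.example.com/").json()
    city = requests.get(entry["links"]["find-city"],
                        params={"q": "prague"}).json()
    forecast = requests.get(city["links"]["forecast"]).json()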

~~~
CodeWriter23
I stopped reading at "Aliens". The Turk explanation was so patronizing IMO it
reduced my tolerance for any other apparent nonsense not related to the topic
at hand.

~~~
sjayasinghe
Do we really?

------
romaniv
As someone who worked on a whole lot of integration projects in the recent
past, I think network APIs can be "solved", but not by the means this article
describes.

JSON-LD looks like a reimplementation of something that was already done by
XML, XML Schemas and WSDL. If several technologies that were _designed_ to be
semantic failed at automating network API integration, why would you think a
sub-format for JSON will succeed? What does it do differently?

To really solve the problem of service integration we need to rethink our
approach to "services" altogether.

One solution that I see would involve a global registry of semantic symbols
(e.g. "temperature", "location", "time") and a constraint solver. So yeah,
distributed Prolog on the global scale. Systems would exchange constraints
until they reach a mutually agreeable solution or fail. Then the derivation
tree would be used to _generate_ a suitable protocol. While I think this is
possible, I don't think there is any real interest in stuff like this right
now.

~~~
patkai
"distributed Prolog on the global scale. Systems would exchange constraints
until they reach a mutually agreeable solution" \- wow, I'm impressed, but not
sure I fully get this. Any pointers or writeups on this or something similar?

~~~
romaniv
I simply verbalized something that I think is eminently possible. Prolog
operates on rules and facts, both of which involve symbols. If you fix the
meaning of some symbols globally, you can run queries that will (potentially)
derive semantically meaningful information. There is nothing stopping two
systems from exchanging rules. If you keep track of which symbol belongs to
which machine (or to the global repository), you can do distributed derivation
by querying the other machine when you run into something global that you
don't know, or into the other machine's symbols that you can't resolve/unify
yourself.

This is not a bulletproof concept. There can be infinite loops, and it can
have really bad performance in some cases. But at least it's something that
would be able to do simple corrections and lookups automatically. (E.g., you
need temperature in Celsius, but the server stores it in Fahrenheit. This is
not something that should force you to write code, because we know what
temperature means and there are globally available conversions.)
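
A toy sketch of that last bit (the vocabulary and its layout are made up, and
a real system would need far more):

    # A tiny "global vocabulary" of fixed symbols plus known conversions
    # between their representations.
    CONVERSIONS = {
        ("temperature", "fahrenheit", "celsius"): lambda f: (f - 32) * 5 / 9,
        ("temperature", "celsius", "fahrenheit"): lambda c: c * 9 / 5 + 32,
    }

    def reconcile(symbol, value, have_unit, want_unit):
        """Bridge two systems that agree on a symbol but not on its unit."""
        if have_unit == want_unit:
            return value
        convert = CONVERSIONS.get((symbol, have_unit, want_unit))
        if convert is None:
            raise ValueError("no known conversion; a human is back in the loop")
        return convert(value)

    # The server stores Fahrenheit, the client wants Celsius; no new code.
    print(reconcile("temperature", 72.0, "fahrenheit", "celsius"))  # ~22.2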

...

Here is something related:
https://fenix.tecnico.ulisboa.pt/downloadFile/395145629051/extendedAbstract.pdf

~~~
romaniv
Also: https://www7.in.tum.de/tools/dahl/iclp2010.pdf

------
aargh_aargh
There's an economic problem with the proposed direction of development.

API providers are typically businesses or other actors whose interest is to
lock API clients into their service. What would they gain by making their API
interoperable with their competitors'?

That's only viable for newcomers to that type of service, and it only works if
they clone the API of an established player rather than improving on it and
standardizing it.

~~~
zdne
Fair point! I'll try to answer with a question: So why is it that Google,
Microsoft & Yahoo cooperate on schema.org to establish shared vocabulary?

They don't have to make it interoperable per se. It'd be enough to use some
terms from a shared vocabulary (user, account, address) and then have some
business-specific terms.

This way the business can use an existing library that knows how to handle
user profiles. It's not that the full client has to be generic; a UI component
that knows how to present a portion of a dictionary is enough.
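
Something like this, for instance (the proprietary namespace and the values
are invented):

    # A profile response mixing shared schema.org terms with one
    # business-specific term, expressed as a JSON-LD document.
    profile = {
        "@context": {
            "@vocab": "https://schema.org/",
            "acme": "https://api.acme.example/vocab#",  # hypothetical
        },
        "@type": "Person",
        "name": "Jane Doe",            # shared terms: any schema.org-aware
        "email": "jane@example.com",   # UI component can render these
        "acme:loyaltyTier": "gold",    # proprietary term, safely ignored
    }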

~~~
goblin89
> So why is it that Google, Microsoft & Yahoo cooperate on schema.org to
> establish shared vocabulary?

Reduced differentiation between underlying services drives them from the
product world into the commodity world. Lower margins and stronger competition
at that level certainly benefit the big players. The services in question,
maybe not so much.

------
akytt
This would all be well and good if the goal were to have computers talk to
other computers. In real life, typically organisations start talking to other
organisations and _maybe_ there will be computers involved eventually. Most of
the integration complexity is building a technical and functional clutch so
that two organisations can talk without leaking too much (dynamic) complexity
across their boundaries. And that does not lend itself well to automation.

~~~
icebraining
> In real life, typically organisations start talking to other organisations
> and _maybe_ there will be computers involved eventually.

Do they? I work for a company that does a fair share of M2M integrations, yet
we almost never talk to the other organization; we merely use their APIs. I
don't know what's typical, but our case certainly isn't rare.

------
nathanaldensr
Hold up an object in front of three people. Ask all three to describe the
object and what it can do. You'll get three different answers.

The fundamental difficulty with APIs is that they force clients and servers to
use the same domain model. Humans are necessary because only humans have the
intelligence to reconcile differences in domain models.

The idea of inventing some kind of discovery language is just deferring the
difficult work of reconciliation. Computers will need to be as good at
induction as the human brain before it will be possible to eliminate humans
from the process.

~~~
zdne
The thing is, why aren't APIs a mix of domain vocabularies? Why the isolated
silos with no interlinking?

The Web grew strong because sites were interlinked. REST APIs are supposed to
be a generalized Web, and yet they're missing the links! What went wrong?

~~~
nathanaldensr
The Web grew strong because of the combination of hypermedia and humans. When
they look at a hypermedia document like HTML, humans can easily infer context,
relevance, value, and a host of other hard-to-mathematically-define things.

I'm not sure REST APIs are supposed to be a generalized Web. In my view, REST
APIs are simply a loose protocol on top of HTTP that allows computers running
human-written code to communicate with one another. In general, there are
several human-driven processes that must occur before REST APIs have any
value. Example human involvement: Is this REST API valuable to my business?
How much does this REST API cost to use? Does their domain model match ours
enough to extract value from integrating with their REST API?

------
JCzynski
I am extremely skeptical that autonomous APIs are possible without 90% of full
natural language processing. Whatever we do to make APIs have their purposes
be self-documenting, there will still be inferential gaps between what we say
explicitly and what we mean.

We could adopt a highly rigid language describing what it is that an API
provides and what purposes that data is useful for, but that's restrictive and
very brittle, _especially_ against Silicon Valley's favorite activity of
disrupting established ways of doing things. Like going from proofs in ZFC to
proofs in first-order logic, we can make it more stable but only at the cost
of losing lots of power and expressiveness.

------
mcphage
I'm not sure this goal is very practical, even in the toy example you used
(being able to swap data sources for weather forecasts).

If you can use a common vocabulary to access multiple APIs, that requires all
of the APIs to implement the same feature set. Which means getting the API
sources to agree on the features to implement and how to describe them, and
stopping them from adding any features that the others don't have. But of
course, they'll all be motivated to add their own features, to distinguish
themselves from their competition.

And once an API consumer is using a feature that other API producers don't
support, then the consumer is locked into that producer, and the whole shared
vocabulary is for naught. And of course the API consumers will be looking for
additional features, because those translate into features that they can offer
to _their_ customers.

Basically, this requires API producers to work together to hobble their
ability to meet their customers' needs, all to make it easier for their
customers to drop them for a competing endpoint. So it looks like a net
negative for everybody.

------
Animats
APIs usually create a master/slave relationship. The strong party gets to
define the API, and the weak party has to adapt to it. There are few fully
symmetrical APIs.

Usually the seller defines the API, but where the buyer is more powerful, the
buyer sometimes does. See, for example, General Motors' purchasing system for
suppliers. WalMart has something similar. There, the seller must adapt to the
buyer's system.

There are a few systems where there are interchange standards good enough to
allow new parties to communicate as peers without a new implementation. ARINC
does this for the aviation industry.

We have yet to develop systems where both sides enter into communication and
figure out how to talk. This is needed. XML schemas were supposed to help with
that, but nobody used them that way.

------
partycoder
- CORBA has service discovery and interface definitions.

- SOAP has service discovery and interface definitions.

- SOA has service discovery and interface definitions.

Some of these are like over 20 years old. They also included many other
features. I would not describe this as being "the future".

~~~
preordained
That, and I think this is reflective of a general trend where whatever we have
now is wrong because X is the future. Oh noes, are you really going to build
it that way?! But you'll be left behind! I don't think attempting to be a
technology prognosticator is good engineering--or a good strategy for most
anything.

------
teilo
Can we avoid replicating the tarpit that is WSDL for RESTful services? Time
will tell, but I have my doubts.

~~~
tomc1985
Tarpit or no, WSDLs are pretty complete interface documentation.

It's still weird to me that people see APIs as a "thing", worthy of attention
from business types and other nontechnical money-men. And how is it that a
"good API" requires client modules for all these different languages? Do
people not know how to make HTTP requests anymore or something?

(On the client note: "But it makes integration easier" or "I'm lazy, screw
your docs, gimme [gems|pips|npms]" is the usual response I hear. Though, given
that each of these clients represents an external dependency requirement,
maybe it's good that they aren't as easy to use as the rest of your language?)

~~~
EdSharkey
> And how is it that a "good API" requires client modules for all these
> different languages? Do people not know how to make HTTP requests anymore or
> something?

I look at a service like a plate of hors d'oeuvres across the room on a fancy
table.

A client module (aka SDK) is the silver platter with the hors d'oeuvres neatly
lined up for your selection, brought to you by the handsome waiter who is the
only other person in the room who speaks your language.

One could walk to the table, and hardcore partygoers do, but lazy, entitled
sacks like me prefer the delights to be hand-delivered. ;)

~~~
tomc1985
Til you realize those hand-delivered delights aren't particularly well made,
and then you grumble to yourself about how you coulda made them way better and
faster yourself, maybe even in less time than it would take to eat the food on
your plate :P

~~~
EdSharkey
Yes, the promised 'delights' are too often frozen plainwrap food from
Albertsons, barely defrosted in a dirty industrial microwave.

------
tbirrell
I think M2M is great and all, but the thing is, the machines are doing
something for the humans. We can tell a machine to go talk to another machine
but neither machine will know WHAT they are supposed to do unless we tell
them. The HOW is certainly something that can be solved with time and
uniformity, but the WHAT is always something that will require a human
presence. And ultimately, with all the humans in the world needing different
WHATs, I don't see the required uniformity ever coming to be on any scale
larger than the local central authority.

~~~
icebraining
The WHAT is provided by the users when they interact with some machine.

------
urvader
The only prediction I have regarding autonomous APIs is: neuronstreams. If an
API exposes a set of input channels and output channels where data can be sent
between neural nets, they will be able to adapt to whatever format they see
fit, without having to make sense to us humans.

It is a scary thought, but if we want to remove humans from this process, we
shouldn't even be able to understand the communication.

~~~
jbpetersen
Seems like you'd want the users themselves to run the neural nets, to avoid
either a single neural net being trained by all the users and converging to a
single standard (still quite interesting, but not a solution to the problem at
hand), or having to run a separate instance for each individual user to train
as desired.

Thoughts?

------
real-v
(first post)

Howdy,

This thread is making me consider dusting off a compiler that I wrote for a
language that I created for designing APIs. That’s because I strongly agree
that lack of versioning in many client/server architectures makes it difficult
for devs to evolve their codebases. So, in this language I designed, the
versioning of changes is a core concept.

When a server offers an API that can potentially deal with different types of
clients, or with clients that need stability, versioning is a must if you want
any chance of a sane codebase. Versioning allows the natural evolution of the
API while maintaining compatibility with existing clients.

Out of curiosity: if I were to bring the codebase up to date (C++) and make it
downloadable, installable, and usable for free/open source, maybe for Linux
and Windows, would anyone be interested in contributing to a Kickstarter for
that?

regards,

Vlad

------
smaddox
It's not enough to just have APIs published unidirectionally, if you want the
system to evolve into something optimally fit for a particular job.

Think of layers in a convolutional neural network, for example. Each layer of
neural units provides information to the next layer, but fixing the output of
the higher layers limits the trainability and ultimate accuracy of the trained
network. In order to maximize fitness, full backpropagation (or similar) is
needed, with all layers being trained.

What's needed for self-negotiated APIs is a generalization of the CNN model
(or similar) into a variable-length serial communication format. Humans would
define a fitness function either explicitly or implicitly by interacting with
the system, and the self-negotiating API system would use some many-parameter
optimization algorithm to alter both the Server and Client(s) and maximize the
total fitness.

------
codingmyway
Since learning it, I've thought that REST is really aimed at humans. It's all
well and good being able to navigate state, but unless a machine knows
beforehand what to do with a given state, it's not much use to it, other than
perhaps to gather data, in which case that data won't mean anything unless an
intelligence interprets it.

Since written code will only ever do a defined set of operations, most web
APIs are fine being written in RPC style.

The only time I've advocated a RESTful approach is when there was a lot of
public data being exposed and human developers may well have explored the
data.

When an AI can navigate and interpret data that it hasn't seen before, then
things will get interesting.

------
haddr
Unless somebody does it well and it catches on, we are going to keep
reinventing UDDI, semantic web services, and other past attempts...

------
kpil
It's sad that nothing has been learned from the past 40 years, so instead of
building on the good parts of, say, ASN.1, amateurish formats like JSON are
invented, solving none of the hard bits while improving some superficial
readability problems.

------
MrQuincle
I think it's difficult to tell what is gonna be solved by AGI
(https://www.wikiwand.com/en/Artificial_general_intelligence) and what isn't.

Regarding APIs that understand each other, it might very well be too much in
the direction of AGI.

That means that if we want to solve this, we have to look at research that
allows AIs to understand each other:

* Imitation learning

* Language grounding

The latter is also known more abstractly as the symbol grounding problem
(https://www.wikiwand.com/en/Symbol_grounding_problem) and has led to many
debates over the years. A collection of APIs seems useful; having them
interact with each other -- taking the human out of the loop -- might be a
lofty but unattainable goal.

------
AznHisoka
Why is Google search lousy for API discovery? If I search for an email-sending
API, I get results for MailJet and SendGrid - relevant ones. If I search for
an entity identification API, I get AlchemyAPI back - also relevant.

~~~
zdne
The results are relevant only if:

1. the name of the API contains the term or affordance you want to use, and

2. you are a human being (you cannot perform such discovery as a machine).

------
nawitus
I think "using APIs" is AI-complete, thus making "autonomous APIs" a pipe
dream until we have an AGI, and at that point APIs are not that interesting
anymore.

~~~
zdne
Yup, AGI should be able to "figure it out" at runtime (when the two machines
meet).

------
mondays
This is my first time reading about JSON-LD, but it just sounds like a
translation layer to map someone else's keys into my keys (or vice versa).
Does this really get me a whole lot?
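
As far as I can tell, the mapping works something like this (a minimal sketch
with the pyld library; the term choice is illustrative):

    from pyld import jsonld  # JSON-LD processor: pip install PyLD

    doc = {
        "@context": {"who": "https://schema.org/name"},
        "who": "Jane Doe",
    }

    # Expansion rewrites my short key into the shared IRI -- the
    # "translation layer" in question.
    print(jsonld.expand(doc))
    # [{'https://schema.org/name': [{'@value': 'Jane Doe'}]}]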

It seems like Protocol Buffers was made to solve a lot of the problems brought
up here. GraphQL types and schemas seem to go a long way toward this as well.

