
The API wars are coming - WardPlunet
http://gigaom.com/2013/11/24/arm-yourself-the-api-wars-are-coming/
======
miguelrochefort
This is hardly new... I find it odd that people get excited by such trivialities.

The big problem with the micro-API paradigm is the lack of standards. Sure,
they might all use REST, but the semantics of the data are highly arbitrary.
Take 10 very similar web services with REST APIs and you'll quickly realize
that they're all completely different and that you have to learn how to use
them one by one. The solution to this is to standardize data, and I believe
that the semantic web and linked data are a much superior approach.

~~~
smizell
Doesn't true REST (not the RPC kind) address these same things when
implemented? Content types and hypermedia allow clients to know how to access
the data and to follow links to linked data.

If we all built our APIs with a registered content type, it would really go a
long way for standardizing APIs. Throw in some HATEOAS and you've got
standards and links.
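To make the hypermedia point concrete, here's a minimal sketch (the document shape and URLs are invented, in the spirit of HAL-style responses) of a client that follows links by relation name instead of hardcoding URL schemes:

```python
# A client navigating a HAL-style hypermedia document by link relation.
# The document below is invented for illustration.

def get_link(document, rel):
    """Return the href for a given link relation, or None if absent."""
    return document.get("_links", {}).get(rel, {}).get("href")

order = {
    "_links": {
        "self": {"href": "/orders/42"},
        "payment": {"href": "/orders/42/payment"},
        "customer": {"href": "/customers/7"},
    },
    "total": 19.99,
    "status": "pending",
}

# The client asks "where do I pay?" rather than assuming a URL layout.
print(get_link(order, "payment"))  # -> /orders/42/payment
```

The point is that the server can reorganize its URLs freely; clients only depend on the link relations the content type defines.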

~~~
miguelrochefort
You're right. We're still a long way from implementing REST the way it was
intended. And yes, it is in theory possible to automate conversion using
hypermedia and such. However, none of that is going to happen until IDEs let
us generate local classes from hypermedia models.

There's no reason why I shouldn't be able to select an existing data model
(from some popular source, or microformats) and generate a REST endpoint in
one click. There's no reason for anyone to manually create local classes that
mimic these models, code the logic that converts the models in and out, and do
a bunch of other manipulations using installed libraries. I just want to
select the data, plug it through third-party libraries as a service (online
APIs/functions, some of which, being open source, could be cached locally
within the code), and be done with it.
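The "generate local classes from a shared model" idea could look roughly like this in Python, using a tiny invented schema dict in place of a real hypermedia format:

```python
# Sketch: build a local class directly from a shared data model,
# instead of hand-writing a mirror class plus conversion logic.
# The schema format here is invented for illustration.
from dataclasses import make_dataclass

schema = {
    "name": "Person",
    "fields": [("name", str), ("email", str), ("age", int)],
}

# One call turns the model into a usable local class.
Person = make_dataclass(schema["name"], schema["fields"])

p = Person(name="Ada", email="ada@example.com", age=36)
print(p.name)  # -> Ada
```

An IDE doing this against a published vocabulary (microformats, schema.org, etc.) is the "one click" being asked for above.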

We need to be able to reuse everything in a seamless way. Currently, reusing
something often means more work and frustration, and most people end up coding
their own lesser clones/forks that "do the job". Sigh.

~~~
smizell
What you're describing has already been done by SOAP+WSDL. Even the term
"endpoint" comes from the SOAP world, not the REST world. The IDE (usually)
does all the heavy lifting of generating and consuming these endpoints, and
the developer doesn't have to think about how endpoints are crafted.

REST is an architecture that can be implemented in many different ways, so I
doubt there will ever be a one-size-fits-all solution for REST. I think the
real win will come when web frameworks allow us to work primarily with
resources and representations instead of an MVC architecture.

------
ztnewman
The author is a VP at an "integration platform" company, what a coincidence.

~~~
reasonnotreason
Just because it's in the cloud doesn't mean much. We have countless APIs that
can be chained together. So? No doubt the cloud is not a fad, but this post is
just random hype.

------
woah
What does this mean for privacy? Is the natural state of affairs for your data
to be spread out in thousands of databases run by separate little API vendors?

~~~
miguelrochefort
First of all, there's no such thing as "your data". Nobody "owns" data; it
just is. You don't own the photons your body reflects, you don't own the noise
you produce, you don't own the ripples in the pool you dive into, you don't
own the temperature of the room you heat. Data is a side effect of actions,
and some of it happens to be captured by some agents. As you grow up, you
realize that it's not about the data you "own", but about the data you "know".
Ownership of data is an illusion, and all that really matters about data is
knowledge.

Agents (people and/or machines) collect data to get a better idea of the
world. The more you know about the world, the better you can navigate it. It
lets you make good decisions and accurate predictions. Collecting data is
neither good nor bad; it's just a natural process used to reach truth.

You used to have to collect all the data you needed yourself, leading to data
duplication across distinct individuals. However, as communication improved,
we started to delegate knowledge to third parties. You no longer need to
remember phone numbers, as you can store that data in a cloud service you
trust. However, we quickly realized that being too trusting can be a mistake,
and that it is often necessary to spread data over an array of different
knowledge bases to make sure the data is not lost. This also helps with query
speed and uptime, which is a practical bonus.

The way data will be exchanged in the future is quite obvious. Agents will
gather data, and broadcast it over the network, to make sure that it's as
available as possible. Nobody will care about where the data physically comes
from or whether it exists in multiple places. Distribution and caching will
all be done systematically for all data, and all you'll have to care about is
the data itself. Once we have a good semantic framework and naming
conventions, you'll be able to directly think about the data and completely
ignore everything else. There won't be such a thing as "Facebook" and "Amazon"
and "iTunes" and "Youtube". There will be People, Products, Music and Videos,
and whether they come from service A or service B will be irrelevant.

The "API vendors" have 3 distinct jobs. First, they can act as relays/nodes
that store/cache/distribute data. Second, they can act as functions/services
that manipulate/transform data. Third, they can act as gateways to the
physical world, acting as agents that read from and write to it (I/O). In any
case, they offer an actual service, not data.

People will no longer pay for data. Data is cheap, data is easy to clone, data
is easy to distribute. Therefore, data will be free and distributed without
any restriction. However, people will still pay for the infrastructure: for
the machines that serve them data on demand, for the machines that customize
their data, for the machines that watch them, listen to them, talk to them.
But data? People won't even imagine that data can or should be restricted, and
won't care at all about its source; they'll just want it fast.

The whole privacy issue is a non-issue that will resolve itself over time.
It's not a technical issue but a social issue. We don't have to fix privacy,
we have to fix people. If there's anything accurate about this "API war", it's
that privacy won't be a major concern. The real problem with the future
described by the article is the inferior way APIs will communicate. They still
require manual integration, they still require dirty adaptors to let APIs talk
to one another. This human intervention is not necessary, and semantic
standards and ontologies are the key to this issue.

------
danmaz74
I'd like to understand better how these API providers are going to make money.
By directly charging for access to their API?

~~~
ismaelc
You can check out this presentation on business models for APIs:
[http://www.slideshare.net/mashapeinc/the-art-of-selling-api-i-
edition](http://www.slideshare.net/mashapeinc/the-art-of-selling-api-i-
edition) (Disclosure: I work for Mashape)

~~~
danmaz74
Thanks Ismael - I only saw your reply now, but I'm very interested in this. I
already received more than one request to create an API for hashtagify.me, but
everybody disappeared when I said "money"...

------
EGreg
Um, APIs have been around for a long time. Maybe cloud-hosted APIs are
"new"... but other than that, it ain't new.

What could really take off is an API server platform that's decentralized and
does authentication, API console, throttling, billing, etc.
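As one concrete piece of such a platform, throttling could be handled per client key with a token bucket. A minimal sketch (a common technique; nothing here is from the article):

```python
# Sketch: token-bucket throttle an API gateway could apply per client.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = time.monotonic()

    def allow(self):
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
print(bucket.allow())  # -> True (bucket starts full)
```

In a real gateway you'd keep one bucket per API key and return HTTP 429 when `allow()` is False; billing and auth would hang off the same per-key record.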

------
bsenftner
That article believes the reader is a moron.

