
Ask HN: Do you use automated tools to create APIs, or do you code them manually? - highhedgehog
Do you use some sort of framework/tool for creating the APIs needed for your product/service/application/etc., for example Loopback (https://loopback.io/), or do you code them by hand?
======
sunir
APIs are interfaces (it’s right in the name!) and should never be directly
tied to implementation because:

1\. the interfaces must remain stable to the outside world that relies on them

2\. They select which underlying resources and functionality are accessible to
outside users, and which are hidden. A lot of your internal implementation is
either a mess, “temporary”, insecure or intentionally internal.

3\. They control access to the internal application through authentication,
authorization, security, and translating data in both directions.

4\. When the internal representation changes, they map the new implementation
to the old interface to ensure the system remains reliable to API consumers.

5\. They offer migration paths when change is necessary

That being said...

Auto API generators are really useful for internal systems where you control
the underlying system, the API, and all systems relying on the API.

They are also useful to build an initial API that you plan to fork.

~~~
013a
Yeah I agree with your stance, but not the conclusion. The way that gRPC (and
many other systems) handle this is beautiful and the way all APIs should be
built: your API is a specification, not code, so you start with the spec
(SDL), then generate the adapters your implementation needs to plug into it.

This helps elevate changes to the API itself; you can easily write automated
systems which detect changes to the specific SDL files. Or, the way companies
like Namely [1] do it, keep those SDLs inside a separate repo, then publish
the adapter libraries on private npm/etc to be consumed by your
implementation.

[1] [https://medium.com/namely-labs/how-we-build-grpc-services-
at...](https://medium.com/namely-labs/how-we-build-grpc-services-at-
namely-52a3ae9e7c35)
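For illustration, a spec-first SDL can be as small as this (the service and message names here are invented):

```protobuf
// Client stubs and server interfaces are generated from this file,
// never written by hand.
syntax = "proto3";

package billing.v1;

message GetInvoiceRequest {
  string invoice_id = 1;
}

message Invoice {
  string invoice_id = 1;
  int64 amount_cents = 2;
}

service Billing {
  rpc GetInvoice(GetInvoiceRequest) returns (Invoice);
}
```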

~~~
remote_phone
This has been around since the early 90s or earlier. ONC RPC did this: you
define the interface file and it generates the client and server stubs for
you.

NFS is based on this, as are other services. Conceptually it’s exactly the
same, with some underlying differences.

~~~
majewsky
Going even further back in history, ASN.1 is also like this. It's a
description language for data structures, and there are separate
representations that can be derived from them. It's sort of like JSON, JSON
Schema and Protobuf in one.

TLS certs are encoded in ASN.1 DER, for instance, and LDAP messages are
encoded in ASN.1 BER.
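A flavour of what that looks like - a tiny ASN.1 module (names invented), from which DER, BER and the other encodings can all be derived:

```asn1
-- The data shape is defined once; encodings are derived from it,
-- much like Protobuf today.
Example DEFINITIONS ::= BEGIN
    Person ::= SEQUENCE {
        name  UTF8String,
        age   INTEGER
    }
END
```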

------
bluGill
This is not an XOR question. Both are valid for different APIs. I start with
the problem and design the right solution to it.

Often I have a simple problem where I can write a simple, clean API quickly by
hand. Generation is a negative here: generated APIs tend to be complex and hard
for the user to read.

Sometimes my requirements need something that a tool does better. For example,
protobuf gives me an efficient over-the-wire API that can be used in multiple
languages: I'll let protobuf generate those APIs as I can't do better by hand
(though we can debate which tool is better for ages).

Sometimes I have a complex situation where I'll write my own generator. For
example I once made a unit system generator for C++: it was able to multiply
light-years by seconds and convert to miles/fortnight - no way would a
handwritten API support all the code needed for that but with generation it
was automatic (why you would want to do the above is an exercise for the
reader). The API was easier to understand than Boost's unit system (APIs are
about compromises, so I won't claim mine is better).
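Not the original C++ generator, but a rough Python sketch of the dimension-tracking idea it generated code for (the unit constants are approximate):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float        # magnitude in SI base units (metres, seconds)
    length: int = 0     # exponent of the length dimension
    time: int = 0       # exponent of the time dimension

    def __mul__(self, other: "Quantity") -> "Quantity":
        # Multiplying quantities adds dimension exponents.
        return Quantity(self.value * other.value,
                        self.length + other.length,
                        self.time + other.time)

    def to(self, unit: "Quantity") -> float:
        # Conversion is only legal between identical dimension vectors.
        if (self.length, self.time) != (unit.length, unit.time):
            raise TypeError("incompatible dimensions")
        return self.value / unit.value

# A few units, expressed in SI base units.
METRE = Quantity(1.0, length=1)
SECOND = Quantity(1.0, time=1)
MILE = Quantity(1609.344, length=1)
LIGHT_YEAR = Quantity(9.4607e15, length=1)
FORTNIGHT = Quantity(14 * 86400.0, time=1)

# Light-years times seconds yields a length*time quantity,
# which can be read out in mile-fortnights but not in miles.
q = LIGHT_YEAR * SECOND
print(q.to(MILE * FORTNIGHT))
```

A real generator would emit one distinct type per dimension vector so mismatches fail at compile time rather than at runtime.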

------
Just1689
In a few projects where we had a specific experiment we needed insights from,
we ran PostGrest [1].

Basically you create your tables and run PostGrest. Bam! You have an HTTP
interface / API for your database. We would then create light wrappers around
those that took on specific responsibilities - security, audit, etc. The
wrapped APIs are what we exposed publicly.

This may not sound all that helpful but it made the bit we implemented
unbelievably tiny. As a plus, we found that a Java application that exposes an
endpoint and calls an endpoint is fast to start / stop because it doesn't mess
around with DB connection pools.

[1] [http://postgrest.org/en/v5.2/](http://postgrest.org/en/v5.2/)

~~~
royjacobs
How do you handle versioning? If you add a new required field to one of the
tables (perhaps even a field that doesn't have a default value), how do you
make sure the consumers of your old API keep working?

~~~
steve-chavez
You can handle versioning with PostgreSQL schemas. You can have v1, v2, etc.
These usually contain views and stored procedures.

------
bpizzi
At work (enterprise stuff) we've grown tired of duplicating thousands of lines
of boring CRUD stuff and turned to code generation. Which is so much better.

The workflow now is:

\- think really hard for 10 minutes about the business problem,

\- describe it into our meta language (typed structs, UML-like, really
simple),

\- instantly click'n'build a whole set of API endpoints down to SQL
create/alter/drop statements, along with full up to date documentation,

\- get excited to be able to deliver so much stuff to customer in no time,

\- aaand finally receive a requirement update ('the last one I promise') and
send I-love-you letters back in time to our old-selves for such a nice
malleable framework (which I dubbed The Platform).
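A toy sketch of the describe-then-generate step; the struct format and type mapping here are invented for illustration, not the real in-house tool:

```python
# Map meta-language types to SQL column types.
SQL_TYPES = {"int": "integer", "str": "text", "bool": "boolean"}

def create_table(name: str, fields: dict) -> str:
    """Turn a typed-struct description into a CREATE TABLE statement."""
    cols = ",\n  ".join(f"{col} {SQL_TYPES[t]}" for col, t in fields.items())
    return f"create table {name} (\n  id serial primary key,\n  {cols}\n);"

print(create_table("invoice", {"amount": "int", "paid": "bool"}))
```

The same description would feed the endpoint and documentation generators, which is what makes a requirement update cheap.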

------
pier25
For the last few years we've done everything by hand using Node or Go.

If I was to start a new API today I'd use Hasura. It automatically creates a
GraphQL schema/API from a Postgres database. It's an amazing tool.

[https://hasura.io/](https://hasura.io/)

~~~
llamataboot
Can you do a full write-up about this tool?

Looks really interesting to easily layer a graphQL API on top of a Rails app
with a few serverless functions...

~~~
pier25
A "full write up" seems a bit intimidating... :)

I'll expand a bit on my previous comment.

So the idea is that Hasura is a stateless layer on top of Postgres that
generates all the necessary GraphQL schema/queries/mutations/real time
subscriptions for doing CRUD based on the Postgres schema. If you change the
tables (either via the Hasura admin or some migration system) it all adapts
automatically, as you'd expect. It can use a remote Postgres DB; you don't
need to run the API and DB on the same machine.
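For a plain `users` table, the generated surface looks roughly like this (field names invented):

```graphql
query {
  users(where: {active: {_eq: true}}, limit: 10) {
    id
    name
  }
}

mutation {
  insert_users(objects: [{name: "Ada"}]) {
    returning { id }
  }
}
```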

Performance is fantastic. Hasura is very efficient in terms of speed and
memory consumption. Even with a free Heroku dyno you should get thousands of
reqs/s.

On top of direct data from tables you can also read Postgres views.
Essentially you can read a custom SQL query from GraphQL.

Hasura can also integrate external GraphQL schemas via a mechanism it calls
"stitching". The idea is that you can point remote GraphQL schemas to Hasura
(on top of the current one from Postgres) and it will serve as a gateway of
sorts between all your GraphQL clients and servers.

Hasura does not include authentication, but it's very easy to integrate with
your current system or with services like Auth0 via JWT.

Hasura also includes a powerful fine grained role-based authorization system.

Whenever anything happens you can configure Hasura to call a URL (webhook) to
do something. Maybe a REST endpoint or a cloud function. This is usually the
way to integrate server side logic.

The only problem we've found is integrating Hasura with our current
authorization system. Our users have multiple roles and we have no way of
deciding which is the current role. Hasura requires a single role to be passed
to its authorization system on the request headers. This is something that is
being worked on AFAIK.

Their youtube channel has lots of little videos showcasing all the
functionality.

[https://www.youtube.com/channel/UCZo1ciR8pZvdD3Wxp9aSNhQ/vid...](https://www.youtube.com/channel/UCZo1ciR8pZvdD3Wxp9aSNhQ/videos)

------
DSotnikov
We use OpenAPI to define APIs. There is an extension with new template
generation, intellisense, snippets, etc for VS Code:
[https://marketplace.visualstudio.com/items?itemName=42Crunch...](https://marketplace.visualstudio.com/items?itemName=42Crunch.vscode-
openapi)

------
escanda
Nobody remembers SOAP anymore, it seems, haha. It's funny, but all those new
documentation and code generators for REST were largely invented for SOAP
messages before.

It doesn't make sense to send SOAP messages to browsers, but I cringe every
time I find myself with a vaguely documented REST API when integrating
systems.

~~~
hacker_123
I similarly cringe at vaguely documented APIs, but being a young developer, my
experience with REST has been better. For instance, I've consumed a SOAP API
where the WSDL specification was primarily a method named "Magic" that
accepted a string "Method" and six string-typed parameters, "Parameter1"
through "Parameter6".

I think the key is to pick a documentation tool that the team will actually
use.

------
t0astbread
Disclaimer: This is not based on real world knowledge. (To be honest I have
practically no "real world knowledge".)

That being said, I just finished a school project where we (our class) were
divided into small teams and we had to implement small RESTful web apps. My
team chose to kick it off by grabbing two people from the front- and backend
team and writing an API specification by hand. It was a breeze and we were
done in a few hours. After that front- and backend (almost) never had to
interact with each other again until the end of the project where we had to
stick the two things together.

This probably isn't applicable to real-world cases where the requirements are
ever-changing and everyone's a full-stack dev (or you don't have a team at
all) but I found this sort of separation quite useful for this project. (It
kept team sizes manageable, different kinds of devs were in separate teams, we
didn't have to wrestle with any tooling that would halt the whole project.)

I see no problem with generating client/server boilerplate from spec though
(like Swagger does, I think).

~~~
t0astbread
This sort of philosophy could be useful when designing a public-facing API
though. In that case you need a well-formed implementation-unaware API
documentation and mapping it out upfront by hand could save you lots of
trouble.

------
streetcat1
Use gRPC. With one definition file you can generate:

1) Client code in various languages. 2) Server code in golang, python or
nodejs. 3) Swagger. 4) A REST interface if you want to. 5) Gorm definitions if
you use golang gorm.

------
vyshane
We've been using gRPC and Protocol Buffers for the last couple of years. We
write APIs using the Protobuf interface definition language, then generate
client libraries and server side interfaces. Then it's a matter of
implementing the server by filling in the blanks.

~~~
kminehart
I love protobuf for this reason. Personally I've opted for Twirp instead of
gRPC, as gRPC has a lot of baggage, and streaming is really not necessary for
me.

We've had to drop-in-replace, or add a validation or access layer service for
something, and using protobuf has made this super easy. Anything interacting
with that service is none the wiser.

~~~
vyshane
gRPC has been solid for us on the JVM, and streaming has been great when
consuming from Apache Flink jobs, integrating with message queues, receiving
push notifications and so on. For async work it's useful to have more than
just request/response.

I've been playing with the FoundationDB Record Layer for a personal project of
mine, and with this setup I can generate not only the API implementation, but
also the models used by the persistence layer:

Protobuf (Messages) -> gRPC -> Scala/Monix -> Protobuf (Models) ->
FoundationDB

~~~
praneshp
> Protobuf (Models)

Sounds really cool! Is this something that comes out of the box or generated
by your own plugins?

~~~
vyshane
FoundationDB Record Layer uses protocol buffers out of the box. They leverage
the fact that you can evolve protobuf messages in a sane way. That's their
equivalent of doing database schema migrations.

------
fimdomeio
(very very small team) We have some handmade scripts in place to generate
basic crud endpoints, generated files are then adjusted to the specific needs,
but it goes a long way in keeping things organized and consistent with very
little effort.

------
cwilby
In my case I like the end product to be code. I use snippets/generators to
create components (models/controllers/middleware) then modify as needed.

Having used Loopback before: it's a quick way to get an API up and running,
but I personally struggle with injecting logic into endpoints / writing custom
endpoints.

If the code's "all there", I know where to look. If I have to intercept hooks
it adds an extra layer when searching.

In summary, Loopback has been great for creating APIs where all I care about
is CRUD, but for larger projects I stick with snippets/generators so I can
extend more easily later.

------
steve_taylor
Lately, I’ve been getting back into Spring Boot. Spring Data REST automates a
lot of the CRUD endpoints, with easy enough configuration and customization.
I’ve been declaratively securing it all with Spring Security.

~~~
vbsteven
I prefer to code one level deeper and I mostly use plain Spring MVC
controllers. That way I can still have spring security for the endpoints but
it keeps the endpoints more decoupled from the repositories.

I typically have a repository generated by Spring Data, a small service layer
with business logic on top of those and then an MVC controller that only talks
to the service layer, never the repositories.

Each controller also has its own DTO class(es) for request bodies and
responses and a small converter between DTO and entity. Kotlin extension
methods make it easy to add the toDto() method onto the entity so a typical
controller will fetch the entity from the service and return entity.toDto().

Kotlin, Spring Boot and Spring Data are amazingly well suited for this.

~~~
steve_taylor
I was doing things manually too, even security!

You don't really need DTOs because you can use projections and set a default
projection to be used when that entity type is returned in a collection. Any
entity fields that should never be exposed can be annotated with @JsonIgnore.
And then if you need endpoints that aren't CRUD, you can build those the usual
way.

~~~
vbsteven
I’ll check out the projections as they seem interesting and I don’t know them
very well.

------
meddlepal
For personal projects I'll hand code them (usually) because I like thinking
about API design and API UX.

For professional stuff... it really depends. I like gRPC but codegen needs
team buy-in... It can quickly make a fast development loop hurt if done
poorly. Doubly so if IDEs are involved for some users and the IDE is
constantly updating its caches of types and interfaces. I've just seen it
turn into a hot frustrating mess very quickly.

------
fwouts
We tried writing OpenAPI docs to implement a contract-first development
workflow, with the idea that backend & frontend/mobile engineers would agree
on the API interface by discussing OpenAPI changes in a pull request, and only
then start implementing it (on the backend side) and using it (on the client
side).

This didn't pan out well, because it turns out OpenAPI isn't very easy to
read, especially when you're reviewing a diff in a pull request. We didn't get
the engagement we were looking for in pull requests.

We've since invested in building a simpler, human-friendly API description
language based on TypeScript, which exports to OpenAPI 3. It's still early,
but we've got a lot of positive feedback and quick adoption across the company
(50 engineers).

You can check it out at
[https://github.com/airtasker/spot](https://github.com/airtasker/spot). Feel
free to send us feedback in GitHub issues or replying to this comment :)

------
citrusx
I prefer to write them by hand. Most APIs, to start, don't have a lot to them.
They tend to grow in scope over time. So, it's pretty easy to just throw
together your initial idea, and incrementally grow it from there.

I might think differently if confronted with a huge API surface area to build
off the bat, but I haven't run into that yet.

------
ChrisMarshallNY
Manually. However, I have had the luxury of implementing relatively small
APIs. If I was doing something like the Google APIs, I'd probably consider
automation. That said, I'd probably want to write the automation, myself, as
I'm an inveterate control freak.

------
avinium
For HTTP APIs, I'm a full convert to OpenAPI - write your API document by
hand, then code-gen the client/server stubs.

It requires a small investment upfront, but will pay huge dividends once your
project is rolling. You have a single source of truth for publicly exposed
endpoints and model descriptions (your API document), and you can instantly
regenerate certain key components (e.g. model binding, new routes, etc)
whenever that document changes.
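For a sense of scale, a minimal hand-written OpenAPI 3 document looks like this (paths and schema fields invented):

```yaml
openapi: "3.0.0"
info:
  title: Example API
  version: "1.0.0"
paths:
  /users/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: A single user
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/User"
components:
  schemas:
    User:
      type: object
      properties:
        id: { type: string }
        name: { type: string }
```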

I actually contributed the F#/Giraffe generator to the OpenAPI generator
project, which you can find at [https://github.com/OpenAPITools/openapi-
generator](https://github.com/OpenAPITools/openapi-generator)

------
abetlen
Yeah, a couple of years ago we switched from using vanilla Flask to Connexion,
which lets you describe your API through an OpenAPI spec. Connexion handles
routing and request validation, and our developers can just import the YAML
into Postman for testing, as well as use Redoc for generating pretty
documentation sites. Overall, the biggest pain point, as others have
mentioned, is writing and maintaining the spec. OpenAPI's structure can take
some time to get used to, and maintaining the whole API in one file is a
little tough, but it's not unmanageable with code folding and good schema
definitions.

------
andreasklinger
Imho it matters less.

What's important is that you have rigorous testing around your API.

APIs are essentially external contracts people build against. You don't want
to break this contract.

make sure it

\- never changes unless you know about it

\- updates the documentation whenever it changes
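One way to sketch the "never changes unless you know about it" rule: keep a canonical snapshot of the contract under version control and fail CI on any drift. The endpoints here are invented for illustration.

```python
import json

CONTRACT = {
    "GET /users": {"returns": ["id", "name"]},
    "POST /users": {"accepts": ["name"], "returns": ["id"]},
}

# In real life this string would be read from a committed snapshot file.
SNAPSHOT = json.dumps(CONTRACT, sort_keys=True)

def assert_contract_unchanged(current: dict) -> None:
    # Canonical JSON so key order can't cause spurious diffs.
    if json.dumps(current, sort_keys=True) != SNAPSHOT:
        raise AssertionError(
            "API contract drifted - update the snapshot (and the docs) deliberately")

assert_contract_unchanged(CONTRACT)
```

Updating the snapshot then becomes the explicit, reviewable act of changing the contract.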

------
mschuster91
I used Silex for a long time and when it got deprecated moved over to
Symfony's MicroKernel
([https://symfony.com/doc/current/configuration/micro_kernel_t...](https://symfony.com/doc/current/configuration/micro_kernel_trait.html)).
Tiny enough to get started in a matter of minutes and when your project grows
bigger then you can easily refactor either the whole project or just parts of
it to "standard" Symfony architecture.

------
SergeAx
I've tried two approaches.

1) Write code, generate Swagger/OpenAPI from it. Works pretty well with big
frameworks like Spring for Java or Symfony for PHP. Drawback: it is too easy
to change the API, which tends to break backwards compatibility too often.

2) Write Swagger/OpenAPI, generate code stubs from it. Works well enough with
Go and TypeScript. Tends to keep client-server contracts stable. Drawback:
server code is overly complicated and needs an extra layer of DTOs to convert
from domain terms to API models.

edit: the 2nd approach is also good for automated testing.

------
bestouff
I'm using a custom protocol on top of MQTT. I have a big CSV file with all the
topics/payload types/etc. specified, which is then used to generate a common
library for our software services. Thanks to Rust's nice code generation
capabilities, I have several types (many enums) which automatically
serialize/deserialize from/to MQTT messages, checks included. Really cute.

~~~
jph
Nice! Can you say more about how you're able to do the Rust aspects of code
gen and checks?

------
znpy
It’s interesting that no one has mentioned CORBA... if anyone has
success/horror stories to share about CORBA, I’d gladly listen.

------
pavelevst
I write API code manually and use a testing tool that also generates an
OpenAPI file and saves it in git. This keeps the API docs always up to date,
with a history of changes to the actual API via git. (Stack: Rails, RSpec and
some gem for OpenAPI.)

------
scardine
I use Django REST Framework, which may or may not be an automated tool
depending on the definition you are using - but DRF makes APIs very
declarative and I love it (batteries included).

------
graycat
Okay, I understand some APIs:

(i) TCP/IP

(ii) HTTP

(iii) ASN.1

(iv) SQL

(v) The key-value session state store I wrote for my Web site (cheap, simple,
quick, dirty version of Redis).

Etc.

Now, how can the design and programming of such APIs be "automated"????

------
bjacobt
I use feathers [1] and like it a lot.

[1] [https://feathersjs.com/](https://feathersjs.com/)

------
kkarakk
most languages have a library that takes a json structure from a file and
creates an AP. for eg json-server on node.js, I just use that initially until
the "need" for the db becomes clear ie what data do I need to interact with.
After that it's custom all the way - it's more malleable I find
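A sketch of that workflow with json-server (the file name and port are just its defaults; the todo data is invented):

```shell
# json-server turns a JSON file into a REST API; routes are derived
# from the top-level keys of db.json.
cat > db.json <<'EOF'
{ "todos": [ { "id": 1, "title": "draft the schema", "done": false } ] }
EOF

# Then (not run here - it starts a server in the foreground):
#   npx json-server --watch db.json --port 3000
# GET /todos, GET /todos/1, POST /todos ... now work against the file.
```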

------
Mayeul
Yes, I use the Springfox implementation for Java, for both the server API and
the client API (mainly for automated tests).

------
brianzelip
What’s an example of automating API endpoints in Node.js? I always just whip
up an Express.js MVC by hand.

------
llamataboot
I code them by hand, but I like to automate as much of the documentation
generation as possible...

------
wheelerwj
It depends on the stage of the project, I think.

I think early stage and MVP projects are almost always written by hand.

~~~
clavalle
I feel the opposite: automated tools are really useful for smallish POC type
things -- MVPs and early stage work, but fail when things reach a certain
level of complexity.

