Hacker News
A Visual Guide to What's New in Swagger 3.0 (readme.io)
188 points by gkoberger on March 21, 2017 | 86 comments



(Note: I'm the author, but have nothing to do with the Open API Initiative)

Swagger 2 (the current version) got really popular over the past few months as a way to document your API. Now, Swagger 3 (er, Open API Spec 3, as it's now known) is about to launch. I had a hard time finding what was new, so we made an example-filled guide that will help with your migrations.

tl;dr: Swagger 3 isn't ready for use yet, and is way more complex but solves a lot of problems with 2.


Hey Gregory, thanks for the article. Would it be possible to change your title to OpenAPI 3 and not Swagger 3.0? Swagger is the name of the tooling produced by Smartbear that supports OpenAPI and they just released new versions of their Swagger tooling, but it's not the same thing as OpenAPI 3.0. Yes I know it is confusing :-)


I second this, I was confused too.


The idea of a standardized API documentation format is great. However, the community that surrounds it can't mature with this amount of churn in the standard. Who is going to write a language-specific client generator if the work is obsolete a few weeks later? It would be interesting to know what was so wrong with Swagger2 that we need a new standard.


Swagger Codegen [1] is a free, open source code generator that works with Swagger 1.2 and 2.0 and supports 30+ API clients, 20+ server stubs, and API documentation generators:

- API clients: ActionScript, Bash, C# (.net 2.0, 4.0 or later), C++ (cpprest, Qt5, Tizen), Clojure, Dart, Elixir, Go, Groovy, Haskell, Java (Jersey1.x, Jersey2.x, OkHttp, Retrofit1.x, Retrofit2.x, Feign), Node.js (ES5, ES6, AngularJS with Google Closure Compiler annotations), Objective-C, Perl, PHP, Python, Ruby, Scala, Swift (2.x, 3.x), Typescript (Angular1.x, Angular2.x, Fetch, jQuery, Node)

- Server stubs: C# (ASP.NET Core, NancyFx), Erlang, Go, Haskell, Java (MSF4J, Spring, Undertow, JAX-RS: CDI, CXF, Inflector, RestEasy), PHP (Lumen, Slim, Silex, Zend Expressive), Python (Flask), NodeJS, Ruby (Sinatra, Rails5), Scala (Finch, Scalatra)

- API documentation generators: HTML, Confluence Wiki

Lots of companies are using it in production and the project is very active with 500+ contributors.

Swagger Codegen leverages another open source project "Swagger Parser" [2] to parse the Swagger specification (JSON/YAML) and the parser will later support OpenAPI 3.0 so eventually Swagger Codegen will support Swagger 1.2, 2.0 and OpenAPI 3.0.

[1] https://github.com/swagger-api/swagger-codegen

[2] https://github.com/swagger-api/swagger-parser


I'm personally not a huge fan of Swagger, however I don't think you have to worry. Since 2012, there's only been two new versions. It's not exactly a fast-moving spec that will be out of date in a few weeks.

I do believe that Swagger 3 (and the rename) will splinter the community, however I think fixing some of the fundamental issues with Swagger (many of which are enumerated in the blog post) is very important.


It's worth adding that Swagger 2.0 documents will be losslessly convertible to OAS 3.0.


Microsoft actually writes a language-specific client generator, AutoRest: https://github.com/Azure/autorest

I believe at this time the majority of the Azure SDKs are using it, so quite an investment has been made.


We use OpenAPI in many other places too. Azure Logic Apps, Azure API Management, Azure Functions, Azure API Apps, Microsoft Flow, PowerApps. And the list just keeps getting longer. Also, the new Microsoft Docs site is driven off OpenAPI.


This issue list on the OpenAPI GitHub repo is full of requests for change. The spec hasn't changed in a long time, and although this is a fairly big change, it shouldn't be a particularly difficult change for tooling folks to adapt to. This isn't a new standard, it's just a new name, with some fixes and some new features.


Thanks for the article. I loved how they solved the "how do we properly document hypermedia APIs" question that I get asked at every Meetup.

I think there is a little error in the article. In the "Request Format" example the request method should be PUT.


What does 'OpenAPI 3 now specifics YAML is 1.2' mean?


Swagger 2 was either YAML or JSON. Since YAML is an evolving standard, OpenAPI 3 just specifies that your YAML should be in YAML 1.2.

Likely a change that won't affect anyone; it's just a clarification.


I see, maybe it should say 'specifies' and/or whatever other edits needed to make that read like English.


By declaring conformance to YAML 1.2 and requiring conformance to the JSON Schema ruleset defined in YAML 1.2 we can ensure that any YAML OpenAPI document can be converted into JSON without any loss of information.
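
As a rough illustration (all names here are hypothetical), a YAML document that stays within that JSON-compatible subset (plain maps, sequences, strings, numbers, booleans, and null; no anchors resolving to non-JSON types, no timestamps) converts to JSON and back without losing anything:

```yaml
# A hypothetical OpenAPI 3 document using only YAML constructs
# that have direct JSON equivalents.
openapi: "3.0.0"          # quoted so it stays a string, as it would in JSON
info:
  title: Example API      # hypothetical API name
  version: "1.0"
paths:
  /pets:
    get:
      responses:
        "200":            # status codes quoted: JSON object keys are strings
          description: A list of pets
```

YAML-only features like merge keys or the `!!timestamp` tag have no JSON counterpart, which is presumably why the spec pins the JSON Schema ruleset.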


Have they fixed the very slow swagger-ui project yet?


I guess every generation of developers has to make its own mistakes...

http://www.omg.org/spec/CORBA/

https://www.w3.org/TR/2007/REC-soap12-part0-20070427/


The one distinction that I think is worth making is that OpenAPI describes HTTP APIs using the semantics of HTTP. Corba and SOAP attempted to be protocol independent. That's much harder to do well. Another good aspect of OpenAPI is that most people who are using it, are starting towards using the definition as the primary design artifact and then letting tooling work on that. Almost no-one designed WSDL or IDL by hand. Focusing on an OpenAPI as a design contract helps developers produce solutions that are more independent of tooling. And that's a good thing.


There's nothing wrong with just "tunneling" SOAP or anything else over HTTP POST. Tons of APIs do this and work just fine.

Now maybe it's easier on clients when you can just curl -XDELETE something but I'm not sure it's that big of a difference in the end. Especially if you have auto-gen'd client code.


The problem is that you abandon a bunch of existing architecture and middleware: an extremely rich caching API, content negotiation, proxy info, authentication.

I can trivially set an HTTP load balancer and log status codes. Can't say the same for SOAP.


It makes it hard to take advantage of HTTP caches. There is also some redundancy because SOAP has headers and HTTP has headers. Which to use? It also is handy to be able to make idempotent requests, especially over flakey networks.


HTTP caching is a good benefit. But you can get it just by using GET and shoving the body in the querystring.

Headers: Use the SOAP ones, you're just tunneling SOAP messages.

Idempotency requires the app to implement it that way. HTTP doesn't really help there.


That might be true, and happens in lots of areas, but Swagger is nothing like Corba/SOAP specs and JSON is nothing like SOAP.

JSON gives most of the stuff SOAP was actually used for (except for bureaucratic spec-driven edge cases), for 20% of the complexity -- so the JSON generation did something right.

(I'm old enough to have been through CORBA and SOAP).


> so the JSON generation did something right

Rediscovered S-expressions and reimplemented them in a mix of square and curly braces ;).


Well, most people just don't like S-expressions, and never will. Get over it ;)


Shhh ... speaking ill of S-expressions is verboten here.


Your post appears to characterize any such IDL as a "mistake". What makes attempts to formalize an API in this manner inherently problematic? Or did you just mean that it's a mistake to do a different IDL for REST-specific APIs instead of using an extant one? Does your complaint extend to non-web IDLs like Thrift or protobufs?

In my experience, there is nothing inherently wrong with this type of IDL. Like all architectural decisions, it comes with its own tradeoffs, but there's no reason the tradeoff profile is inherently wrong.


CORBA may have failed, but not so much because of its design. Today we use HTTP as an RPC protocol, and RPC is what CORBA was. And today we have things like gRPC (Google's protocol, based on HTTP/2 and Protocol Buffers) and Thrift (mainly driven by Facebook, I think). Microsoft's COM, of course, is very similar to CORBA, and hugely successful.

CORBA is of course more complex, in ways that are less useful today. One particular feature was that the server always returned "live" objects that transparently proxied the calls back to the server. So you do something like getUser(123).delete(), and it would cause the User object's delete method to be called remotely. It turns out this generates a rather tight coupling between client and server; in particular, the client and server both have to use reference counting to keep objects alive as long as they are in use by a client. Things tend to get out of hand that way. While it is certainly magical to use a remote server exactly like a local one (location transparency), it's also a performance trap.

But of course Swagger/OpenAPI has nothing to do with this.


I think what you're talking about is more the forward/backward compatibility features.

Much of that was independent of CORBA; you just needed to release different versions of the API. This is identical in SOAP and REST today, and many client libraries are generated from specs.


> Lastly, you’re no longer allowed to define a response for GET and DELETE (which matches how RESTful APIs work).

I get that part about the DELETE, but no response for GET sounds odd. As I couldn't find anything in the spec RC: Is there further information available regarding that?


Total guess, but they may mean request bodies instead of responses, since GET and DELETE normally don't have request bodies.

However, I'm not sure there is a definitive answer on whether that's true from an HTTP perspective. I also couldn't find it in a skim of the OpenAPI 3.0 spec.

In the end, REST [1] itself does not prescribe either condition and the HTTP 1.1 [2] spec doesn't either.

> A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.

[1] https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc... [2] https://tools.ietf.org/html/rfc7231#section-4.3.1


I tend to agree with the interpretation given here: http://stackoverflow.com/questions/978061/http-get-with-requ... -- that if a GET request contains a body, the server should ignore it in order to stay compliant with the spec.


Sorry, typo. Should be "request bodies"!


Yes, we decided to be opinionated and say you can't describe bodies for operations where the spec says bodies have no meaning. The text should be there, but I think the RC0 revision has some formatting issues that are hiding the text.
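
For anyone migrating: under that rule, a sketch of where `requestBody` can and can't appear in an OAS 3 document might look like this (paths and schema names are made up):

```yaml
paths:
  /users:
    post:
      requestBody:              # fine: POST bodies have defined semantics
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/User'
      responses:
        "201":
          description: Created
    get:                        # no requestBody here: GET bodies are undefined
      responses:
        "200":
          description: A list of users
```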


Isn't that the older spec which has been replaced by new RFCs though? Thought that's what I read earlier this week. RFC 7230-7237 got rid of that expectation it seems?

Imo, a flaw in the earlier spec. It's clearly the case that people have uses for requesting data via JSON, and GET is the only thing that fits expected semantics ATM. Elasticsearch takes json over GET bodies because it just makes sense.


RFC 7231 4.3.1 GET says

A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.

https://tools.ietf.org/html/rfc7231#section-4.3.1


Seems looser. Regardless, it does make sense for a number of applications to have a better query language than query-strings, or base64-nonsense-via-query strings, heh.


The text is different but the meaning is the same. A GET body must not have an effect on the request/response; i.e., you can have one, but you are not allowed to use it for anything.


That's not how I interpret it. I don't read "has no defined semantics" as "you can't make up your own". All that says to me is that the RFC doesn't explicitly define any semantics.

The other line I would guess is there because in the past the RFC was worded a bit differently, such that there's tech (webservers or whatever) out there that ignores request bodies on GET, so using it may lead to issues using those pieces of tech.

You can really do whatever you want. I'm not a fan of restrictive/pedantic interpretations of the spec, because HTTP is necessarily something that is really up to the developer in every way. Your database sure doesn't care if it's doing a non-idempotent write as a result of a web request that was a PUT.

Spec or not, it makes sense to be able to support a richer query language through the only HTTP verb specified for retrieving data. Just accepting that as something we can't do because some RFCs say this or that is a bit silly, because most apps out there can support it just fine. Elasticsearch is a much better piece of software because it ignored that bit of advice. (And I've had no problems running Elasticsearch through various different proxy layers, so software like nginx /haproxy also don't seem to care if you use a GET request body).


I can understand this approach to GET.

But how would you, say, bulk delete 5 different resources? I typically send those payloads in the request body of a DELETE request.


> how would you, say, bulk delete 5 different resources? I typically send those payloads in the request body of a DELETE request

From the RFC: The DELETE method requests that the origin server delete the resource identified by the Request-URI.

https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html

If the ids are in the body, you are not deleting the resource at the URI. I think that for this kind of operation you are better off with a POST.
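
One common workaround, sketched here in OpenAPI 3 terms (the endpoint and field names are invented), is to model the bulk delete as its own POST resource:

```yaml
paths:
  /items/bulk-delete:          # hypothetical "action" resource
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                ids:           # the resources to delete
                  type: array
                  items:
                    type: integer
      responses:
        "204":
          description: All listed items were deleted
```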


"Over the past few years, Swagger 2 has become the de facto standard for defining your API." - LOL. I stopped reading here.


What do you use for defining & documenting your APIs?

I ask because this is something the UK Government is looking at - https://github.com/alphagov/tech-and-data-standards/issues/3...


The best part about Swagger is there's a lot of good open-source tooling for it (though this depends on the language).

One alternative I find super interesting lately is defining APIs with grpc and then exposing them with the gateway proxy: https://github.com/grpc-ecosystem/grpc-gateway

In this way, you get strongly-typed interfaces and Swagger is autogenned for you.


I think most are crafted by hand. Maybe the problem is similar to language, compiler and interpreter design. There is specific software that tries to help by providing solutions for sub-problems (lexers and parser generators), but it is hardly used, as it is hard to customize (see a recent discussion about writing a recursive descent parser), so people hand-craft them.

When designing an API the hard part is getting it right(tm), hopefully guaranteeing some kind of backwards compatibility, and providing a clear path for its consumers, in particular documentation. It is possible to auto-generate some parts of the documentation, but such documentation is probably as helpful as auto-generated javadoc without notes provided by humans.




And it is. If you perform any API governance in your company, odds are it's done with Swagger. RAML and Blueprint are losing adoption.


Unfortunately you're mistaken. Most companies just don't care and document their APIs by hand on an internal wiki. If they document at all.


In my experience it doesn't define it. However, it's becoming the standard for documenting your REST-based HTTP APIs, and it's a focal point of the API itself for developers/stakeholders.


Wtf is "git flavored markdown"?

Do you mean GitHub Flavoured Markdown? It concerns me that people writing software somehow mix up Git and GitHub as being the same thing.


Whoops, fixed!


After writing a lot of long Swagger specs I tried RAML, which, although not that widely supported, feels much more pleasant to write. Any thoughts?


Add API Blueprint to the list of more pleasant to write alternatives. Not to mention it's much easier to render into human-readable documentation. I too have been somewhat dismayed by the rise in popularity of Swagger when there are two competing solutions that, to my mind, are superior.


Has the tooling for RAML improved esp. around the licensing? Last time I looked, the 'standard' server implementation on the JVM had rather restrictive licensing, IIRC it was dual-license AGPL or a commercial license?

(AFAICT RAML is actually a much nicer spec, both simpler in some ways and more powerful in others, but as I say I was never actually able to use it due to the licensing issues on implementations -- only read it.)


Yes, RAML is great. I find it a lot more concise and don't see any advantage to using Swagger, despite that it's older and more widely used.


Write swagger in yaml and it's just as pleasant.


How so? At least in my experience, Swagger doesn't have anything equivalent to traits so there's no way to DRY out your docs. If you have 50 endpoints with pagination, you have to document how that works 50 times. If you change how pagination queries are structured, then you have to change your docs in 50 places. With RAML, you would just change the trait and then all of your docs and your automated testing against them will be updated.

Swagger does have $ref, but it's a much weaker abstraction than RAML traits and resourceTypes.

http://raml.org/developers/raml-200-tutorial#traits
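
For comparison, here's roughly what `$ref` reuse looks like in Swagger 2 for the pagination case (the endpoints are hypothetical): the parameter is defined once, but every endpoint still has to reference it explicitly, and there's no way to bundle several behaviors into one named trait:

```yaml
swagger: "2.0"
info: { title: Example API, version: "1.0" }   # hypothetical API
parameters:                  # reusable definitions, referenced below
  pageParam:
    name: page
    in: query
    type: integer
paths:
  /users:
    get:
      parameters:
        - $ref: '#/parameters/pageParam'       # repeated per endpoint
      responses:
        "200": { description: OK }
  /posts:
    get:
      parameters:
        - $ref: '#/parameters/pageParam'
      responses:
        "200": { description: OK }
```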


http://stoplight.io/ lets you use traits in Swagger. I've really enjoyed using a visual tool to build the API definition, though I don't find it helps with writing documentation.


Goddammit, why are they still calling it Swagger (pardon my French)? I thought they came to their senses and decided to use "OpenAPI".


Just read the first paragraph: "...Since then, it's been moved to the Linux foundation and renamed to Open API"


We're not :-)


Got all excited and awash with nostalgia when seeing the word "swagger" again. Clicked hoping it was about the old "swagger" Pascal snippet library that used to get passed on floppy disk in playgrounds in the 90s.

Googling finds this: http://swag.delphidabbler.com/


Swagger, OpenAPI, RAML, JSON Hyperschemas, toml/yaml/json ...

I don't see how proliferation of ad-hoc syntax contributes to interoperability, which surely should be the goal of an API spec?

My take is that JSON has "won" over XML as RPC or REST serialization format because browsers support it OOTB and there's no schema needed. It simply is the way of least resistance. And since a browser front end is traditionally tied to a web backend anyway, there's no need for a formal "API" (protocol spec, actually) spec as an external artifact most of the time.

Once you try and impose typing on JSON, this very benefit turns against you, and is getting absurd IMHO. Every JSON typing regime needs to work around the fact that JavaScript isn't statically typed. Consider JSON Schema: it is reminiscent of a markup schema of sorts, when a more rational approach would IMO be to represent JSON payloads as an RPC/IDL function signature-style schema.


Is there any plan to expand the support for runtime polymorphism in 3.0?


OpenAPI V3 now supports the anyOf and oneOf constructs from JSON Schema.
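
Assuming the final syntax follows JSON Schema's, a schema that can take one of several shapes would look something like this (the schema names are invented):

```yaml
components:
  schemas:
    Cat:
      type: object
      properties:
        meow:
          type: string
    Dog:
      type: object
      properties:
        bark:
          type: string
    Pet:                     # exactly one of the alternatives must match
      oneOf:
        - $ref: '#/components/schemas/Cat'
        - $ref: '#/components/schemas/Dog'
```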


I hope it's better than Swagger 2. I really need something like it, but trying to use Swagger -- after seeing that Kubernetes was using it -- has been a disappointing experience.

I'm particularly frustrated with the lack of good documentation generators. That's more important, in my opinion, than using it to generate clients and servers.


I assume you've tried Swagger-UI[1] already? We just put out a new version (complete rewrite), you can see it in action here[2].

If you're not a fan, please feel free to open a ticket to give us some feedback.

Disclosure: I was contracted to build Swagger-Editor 3.0.

[1] https://github.com/swagger-api/swagger-ui [2] http://petstore.swagger.io/


Yes, I'm familiar with it. And I'm sorry to say that I don't like it at all.

Some things I dislike:

* It's just a flat list of endpoints. You have to click on each to get a description, and it has this sluggish animation that opens up. I want to browse.

* Way too much wasted screen real estate. The huge, wide table that shows when an endpoint is "open" is ridiculous.

* For any given type, you have to switch between "Example Value" and "Model", which is awkward. There's a missed opportunity there, too: Consider the "PUT /user/{username}". Why is the "type" of the body not User? Why couldn't it be a link to the model? I.e., "PUT" takes a User and returns a User. Swagger-UI uses so much empty space to represent this very simple protocol.

* Whatever is being used to display the model is hard to use and read. You have to click on the little arrow to expand each level, and the font sizes are very inconsistent.

etc.

I do wish I had something vastly superior to give as an example, but I can find a lot of faults with every single API documentation site out there that I can think of. I do know some decent ones, though:

https://stripe.com/docs/api/curl

https://www.twilio.com/docs/api/rest/sending-messages

https://plaid.com/docs/api/

This is the level of quality you at least have to reach before my interest is piqued.

I like that these include runnable client examples in multiple languages, and include more than just dry reference documentation; there are descriptions of actual semantics. Moreover, the documentation is presented in an organized way, by topic, and lets the user consume the information linearly without lots of clicks.

As a simpler example, years ago a colleague of mine made a simple autogenerated documentation browser with built-in request running (screenshot [2]), which turned out pretty good. It's simple, but still miles ahead of Swagger-UI in terms of usability. I sometimes wish I'd spent some effort working it into something reusable.

[1] https://stripe.com/docs/api/curl

[2] http://i.imgur.com/x5g1s2R.png


ReDoc looks pretty good - https://github.com/Rebilly/ReDoc

I particularly like its approach to presenting hierarchical data structures.


Looks promising, thanks. I think he could use a graphical designer to tighten it up, because it's pretty evident that it's made by a developer. The UX could use some work; needing to click to navigate into data structures on the page is pretty awkward, even if the effect is "cool".


I'd suggest RAML for pure documentation. Its way of structuring resources is nested with the URI, so UI generators can use that information to group together in the docs.

Also, it supports union types for requests/responses, which means you can template a URI and have multiple response schemas to go with different values of the template, unlike Swagger, which forces you to enumerate every endpoint if you want them to have different response schemas. That's very ugly for logically grouped sets of resources that aren't explicitly part of a collection.
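
A sketch of what that looks like in RAML 1.0 (the type and path names are made up); the `|` declares a union, so one URI can legitimately return either shape:

```yaml
#%RAML 1.0
title: Example API
types:
  Admin:
    properties:
      role: string
  Member:
    properties:
      team: string
/users/{id}:
  get:
    responses:
      200:
        body:
          application/json:
            type: Admin | Member   # union: the response may be either type
```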


Are there any UI generators for it that are worth mentioning, with the sentiments in my other comment [1] in mind?

[1] https://news.ycombinator.com/item?id=13927657


Wow, I was expecting the price for a service like this to be around $20/month but $199/month (custom header/footer) for a service like this seems a bit of a stretch.


$199 is just our most expensive plan! You should be able to do everything on the lower plans. And, we do much more than just host Swagger :)


The example showing how response bodies are specified (media types, content) unfortunately is attached to a GET operation - please use a POST to demonstrate request bodies.


What's the advantage of OpenAPI over json hyperschema?


I'm also curious -- my guess would be the quality and availability of tooling depending on which snowball continues to grow.

I only have experience with hyperschema (from an API design perspective). Would love to hear a current perspective from someone who had experience with both.


Is there a version of swagger-editor for OpenApi? I can't find anything. Similar experience for OpenApi would be a minimal requirement.


The current Swagger-Editor supports Swagger 2.0 (aka OpenAPI Specification 2.0). OAS 3.0 will be supported some time after 3.0 is finalized.

Source: I maintain Swagger-Editor. Check it out, we just finished a rewrite last week!


Can Swagger-Editor load and save with a file dialog yet? I remember that being the most annoying missing detail when trying to use it.


Swagger and OpenAPI are the same thing. They made a confusing name change somewhere between 2 and 3.

(Swagger = trademark owned by SmartBear, OpenAPI = new name, run by Linux Foundation)


You are correct that the Swagger name is owned by Smartbear, however, whereas previously Swagger was used to name tooling and the specification, now Swagger only refers to the tooling built by Smartbear. OpenAPI V2 and V3 is the name of just the specification. Saying that they are the same thing causes more confusion than has already been created.


No one using API blueprint here? https://apiblueprint.org


How does Swagger work with OData and the Graph?


We have been having numerous conversations within Microsoft to figure out how to describe Graph API using OpenAPI. The new Links capability helps a lot. Also, the AnyOf support will help with describing URLs with Expand. There are still a bunch of issues to work out.



