I don’t see that quote in the article at all, and the author doesn’t appear to claim any trickery or sneakiness to get his team to use Racket. It seems like the team built the tool together.
(In case future readers are confused, the original HN title had odd nested double-quotes, and the innermost quoted part was something like "how I tricked my employer into using Racket". Which seemed like terrible PR, giving the impression that this is something for irresponsible fringe fanatics.)
The idea of using a DSL to get around constraints in an upstream language you don't control is indeed a nice one, and one I've used before to good effect.
I'm sure I'm going to get downvoted for this, but the use of Racket is a smell here. Why not use whatever language you were using before to turn the input DSL into the output OpenAPI spec? Sooner or later, another engineer is going to have to maintain that. You already have another language you're using for other things, and you're introducing an entirely new language and ecosystem to do one small thing. Is this something that couldn't be done in your existing stack?
> Problem: while OpenAPI is great for describing APIs, generating client SDK code and defining your API contracts, it's definitely not a great experience to write OpenAPI documents from scratch.
Personally, my main problem with OpenAPI is that it is commonly used for documenting contracts, but very rarely for actually driving the implementation. In practice, I have seen a few implementations like OpenAPI Generator [0], but a) with dynamic languages like Python, code generation is an anti-pattern, and b) even disregarding the former, the generated result is (in my experience) very incomplete and still needs a lot of manual coding.
I would like to see more frameworks that utilise OpenAPI for routing and request/response validation.
It's always a bit risky to document contracts with the left hand and implement them with the right. The best case to me is a framework that serves both functions at once.
I built something like this at my last job by having an opinionated layer on top of Flask in Python. The idea was to have a small set of standard operation types (think: create, retrieve, update), choices as to input/output types (via the marshmallow library), and the function to process the route. This turns out to be enough to generate both the API contract and the boilerplate for routing, (un)marshalling, etc.
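To make that concrete, here's a rough sketch of the shape it took (the `operation` decorator, schema names and registry here are hypothetical, for illustration only, not the actual internal library): you declare the operation type, the marshmallow schemas and the handler once, and both the Flask route and the material for the OpenAPI entry are derived from that single declaration.

    from flask import Flask, jsonify, request
    from marshmallow import Schema, fields

    app = Flask(__name__)
    REGISTRY = []  # (path, method, body schema, response schema) records feed the spec generator

    class UserIn(Schema):
        name = fields.Str(required=True)

    class UserOut(Schema):
        id = fields.Int(required=True)
        name = fields.Str(required=True)

    def operation(path, kind, body=None, response=None):
        """Register a Flask route and record metadata for the OpenAPI generator."""
        method = {"create": "POST", "retrieve": "GET", "update": "PUT"}[kind]

        def decorator(fn):
            def view(**kwargs):
                # The (un)marshalling boilerplate lives here, once, for every endpoint.
                payload = body().load(request.get_json()) if body else {}
                result = fn(**kwargs, **payload)
                return jsonify(response().dump(result))

            app.add_url_rule(path, fn.__name__, view, methods=[method])
            REGISTRY.append((path, method, body, response))
            return fn
        return decorator

    @operation("/users", kind="create", body=UserIn, response=UserOut)
    def create_user(name):
        return {"id": 1, "name": name}

The spec side then just walks REGISTRY and turns the marshmallow schemas into JSON Schema (apispec can do that conversion, if memory serves).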
Yes, FastAPI [0] takes a similar approach. The problem here is that the single-source-of-truth for your contract is not OpenAPI, but rather your Flask implementation, with the OpenAPI being a documentation artefact; this is problematic for two reasons:
1. Just like any other documentation, it is difficult to verify that the documentation is actually correct and does not diverge from the underlying implementation. This is a lesser issue, as it can be mitigated with good integration testing practices.
2. The bigger issue, for me, is that any client depends on the service implementation. In practice, it is very common to build clients -- especially complex ones like React or mobile UIs -- alongside the service development, and if the spec is defined first both can be done in parallel. With the above approach you either need to a) wait for the service implementation to be complete before you can start implementing the client, or b) base the client on an OpenAPI spec which might potentially differ from the one your framework will generate based on the implementation.
I have recently worked on a project which had that same issue, and our initial solution was to build tests which would compare the generated OpenAPI with the design specification, but that turned out to be extremely complex when we started running into all of the edge cases.
The alternative was to treat OpenAPI as the single-source-of-truth, using it to generate routes and execute validation over requests and responses. The first attempt used Connexion [1], which proved to be a bit too incomplete for our needs, so we implemented an alternative framework [2] (which includes basic client-side support as well).
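For anyone who hasn't seen the spec-first style, the whole wiring with Connexion is roughly this (the spec file and handler module names are made up for illustration):

    # app.py -- routing and validation are driven entirely by the spec file
    import connexion

    app = connexion.App(__name__, specification_dir=".")
    # Each operation in openapi.yaml points at its handler via operationId,
    # e.g. "operationId: handlers.create_user". Connexion resolves that to the
    # Python function, validates requests against the schema, and can validate
    # responses too.
    app.add_api("openapi.yaml", validate_responses=True)

    if __name__ == "__main__":
        app.run(port=8080)

The spec stays the single source of truth; the handlers are just plain functions.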
There's a lot to unpack here, but I'll focus on #2, specifically the client dependency.
My approach is to assign projects to teams with members who can work on both the client and server layers so that the question is less of client blocking on server and more of client and server working together.
In addition, I like an "API first" approach, which in this case means building the signature of the API functions (e.g. with mock data) before finishing the implementation.
That is: the client is still blocked on the server to define the API, but not on the API being implemented, and the client developer works closely enough with the server developer (or is the same person) that they aren't blocked on the definition either.
As always, many software problems devolve into people problems once you stare at them hard enough.
> ... and if the spec is defined first both can be done in parallel.
You could simply write the complete interface down in Python/FastAPI without actual implementation and generate the OpenAPI spec from that interface. That way both teams could start soon.
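Something like this sketch, assuming FastAPI and Pydantic (the models and routes are invented for illustration): the handlers are just stubs, but the generated spec is already complete enough to hand to the client team.

    import json

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class User(BaseModel):
        id: int
        name: str

    @app.post("/users", response_model=User, status_code=201)
    def create_user(user: User):
        raise NotImplementedError  # contract exists, behaviour doesn't yet

    @app.get("/users/{user_id}", response_model=User)
    def get_user(user_id: int):
        raise NotImplementedError

    # Dump the generated spec so client work can start in parallel.
    print(json.dumps(app.openapi(), indent=2))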
Oh most definitely, there are a bunch of ways to solve this concrete problem. But none of them addresses the conceptual issue of specification authority.
In Perl there is a framework called Mojolicious that has an OpenAPI plugin that does not use a code generator.
I find it extremely elegant as it uses annotations in the spec to route the endpoints to the desired controller method.
We often need several dialects of our API, and now we can just deploy the same code, but with an altered spec. Very little overhead.
Sorry, I've heard this statement a few times, that code generation in dynamic languages is an anti-pattern. Do you happen to have any more reading on this?
Sure, the generated code is usually not readable, which is against the Zen of Python. But writing code to generate code, especially if it's contract-driven, is the correct approach no matter the language.
So really what I'm looking for is more substance on why generating code is an anti-pattern in Python but not in C.
My personal feeling is that it just doesn't fit into the workflow. With C you always have a compilation step and thus a build process, so a code generation step fits quite naturally into your makefile (or other build script).
With Python there's normally no build step – you just run the program. Adding code generation means adding another step, which you might forget to run, leading to confusion. If you make the build step mandatory you lose some of the upside of dynamic languages.
I've always solved this with unit/functional tests. The test runner effectively acts as a compiler for the purposes of checking the contract in this process.
This is something that's baked into a gate check on merge to the source repo. Or rather it should be.
So code generation isn't an anti-pattern, but persisting the generated artifacts is? To me that is still code generation; one is more durable and able to be tested, while the other surfaces failures as runtime exceptions.
I've not seen any long-lived Python projects that don't end up implementing runtime checks to validate the shape of these generated objects.
There are two kinds of generated code here. 1. Code templates, which prepare some stubs but which you need to complete manually. This is what openapi-generator does, and it is an anti-pattern because this approach can almost always be avoided in favour of simply using code constructs like inheritance or composition.
2. Intermediary code, which is not to be updated manually and is fed directly into the interpreter or compiler. In compiled languages it is fine, as this intermediary code is not executed directly; what matters is whether it compiles. In interpreted languages, however, that code gets executed directly, and it can be really difficult to debug (as there is an additional level of abstraction, introduced by the macro). But ultimately there is no need for this type of code, as most interpreted languages (Python and JavaScript definitely) allow dynamic construction of various programming constructs (like classes and functions) on the fly.
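As a hypothetical sketch of what "on the fly" can look like (the spec fragment and names are invented for illustration), you can build the model classes at import time instead of committing generated .py files:

    # Build model classes from a (greatly simplified) spec fragment at import time.
    SPEC = {
        "User": {"id": int, "name": str},
        "Order": {"id": int, "total": float},
    }

    def make_model(name, fields):
        def __init__(self, **kwargs):
            for field, typ in fields.items():
                value = kwargs[field]
                if not isinstance(value, typ):
                    raise TypeError(f"{name}.{field} expects {typ.__name__}")
                setattr(self, field, value)
        return type(name, (), {"__init__": __init__, "FIELDS": fields})

    MODELS = {name: make_model(name, f) for name, f in SPEC.items()}

    user = MODELS["User"](id=1, name="alice")  # behaves like a hand-written class

There is no generated file to forget to regenerate, and the only extra layer to step through in a debugger is make_model itself.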
This is where I’m really struggling with this story. They now have two places documentation can reside: in the code for the API itself, and in a separate file encoded with a custom LISP-like language?
I feel like the author is forgetting how software gets developed in the real world... or just human nature in general.
Documentation created with this approach will instantly go stale, and will likely get out of sync with the true API very quickly.
How is this a good solution at all? How is “tricking” people into this unsustainable situation a good thing?
It all depends on how concise and easy to use this is. My experience with yaml/json API generators is that they are extremely buggy due to myriad edge cases. If it goes stale, it's because it takes an hour to figure out how to represent what you want to say in the language, vs. just writing a json blob with some non-machine-readable type annotations.
This is self-contained enough that I could see the language not mattering too much. And while I don't use Racket at all it sounds like the ideal language for this task. I'm not saying it confidently, but I wouldn't be surprised if the Racket implementation is less buggy than any non-lisp. Generating YAML/JSON is a task which lisp might be uniquely suited for.
It really irks me how some treat the practicalities and existing choices of real world software development as an annoying impediment to getting to use a language they're interested in. Whether you're interested in a language and whether you should use it in production can be two separate things. If you've got a compelling reason to use a given language in production, you probably don't have to trick your co-workers into doing anything, you just have a conversation with them! And if you "trick" them into using it for something small, aren't you going to sabotage your long term relationship and trust with them for minimal gain?
> If you've got a compelling reason to use a given language in production, you probably don't have to trick your co-workers into doing anything, you just have a conversation with them!
Nothing in the article suggests that conversation didn't happen. The only mention of "tricked" is the headline the submitter gave it on HN.
Nice little story of how to introduce a new tool at work without getting into a crusade. I guess the manager/architect didn't care much about the additional dependency added to the project, though.
I’ve recently built a thing at work to generate an OpenAPI spec from Clojure specs and some annotations, which worked out surprisingly well. We already have a collection of endpoints through compojure, and we just spec the handler functions and add some annotations for docstrings, and that’s basically it. As an added bonus we have the test suite validate all api calls against the spec, so in theory the OpenAPI spec should always be accurate.
For the curious, we took and hacked https://github.com/metosin/spec-tools to output OpenAPI 3, and added a very thin shim to hook that up to our routes.
Seems like it could be convenient to use TypeScript in some way. Not to code the DSL, but as the DSL itself.
It's language-specific, but if a variation with minor attribute extensions existed that seamlessly round-tripped to YAML, the verbosity would just completely collapse.
Readability, and creating a spec from scratch, could be an order of magnitude more efficient for anyone familiar with JS (reading a handful of interface definitions vs. a human parsing back and forth between the docs and YAML).
Why the phrase "pet tech"? This is a good way to "ask for forgiveness, not permission" which I approve of. But is the goal of adding "pet tech" necessarily a good one? If technology is actually going to move your firm forward long term and is a good long term decision, it's not pet technology. It's a good decision, and it's something that hopefully your team and firm is behind.
If it's a good decision then you shouldn't have to ask for forgiveness rather than permission. It's also, if it's not just a pet technology, a strategic decision that impacts everyone on your team and it's pure ego to think you should make that decision alone.
I agree with both points. I do think there are orgs where, due to, shall we say, "personality constraints", you can have majority consensus that a decision is good and should be made, but it can still get roadblocked. Of course, such situations are an indicator of a dysfunctional org; in a functional one, you have these kinds of open discussions and, like you said, you don't need to ask for forgiveness rather than permission, and you can make these strategic decisions together as a team.
Please don't try to nest quotes of the same kind like this. It reads as 3 parts, quoted - unquoted - quoted, and makes no sense like that.
Yes this is a very pedantic thing to complain about and I do understand what is meant. But it does bother me. Yes, I do realize it's using slanted quotes and not just " - still doesn't read right for me.
edit: Got confused by the title not actually matching the article title and having click-bait information added to it that didn't appear in the original.
It's definitely something that should be justified in a PR. Introducing a new tool, its dependencies, maintenance, knowledge and training requirements is something that should be agreed upon by the team. For agencies especially, tooling is a strategic business decision.
Where in the article does it appear? From what I see, the only claim in that direction is the random(?) headline the submitter added here. And even then, it doesn't have to be interpreted that way - it could also apply to "they'd normally not consider a language like this, but for this specific case it seemed to make sense".
Now you are the single point of failure for this service, and worse, you forced your team to learn something they may have had zero interest in looking into.
Never do this in prod without a heads up and approval from most of your team.
I worked with a guy who, after completing his tickets, instead of looking for more tickets or helping his team, spent his time rewriting the entire client-side application in TypeScript. He was let go the following month.
Am I missing something? The only thing pointing in that direction I see is the submission title here on HN, which is in no way reflected in the article?