
I'm not sure what the current state of things is since I haven't used MySQL recently, but this used to be a perfectly valid thing to do.

The issue was that MySQL doesn't use a full int to store enums. If your enum has 8 values, it stores in 1 byte, if it has more than 8, it stores it in 2 bytes. Adding that 9th value thus requires re-writing the entire table. So yes - it can make sense to "reserve space" to avoid a future table re-write.

You also had to be careful to include `ALGORITHM=INPLACE, LOCK=NONE;` in your `ALTER TABLE` statement when changing the enum or it would lock the table and rewrite it.


> If your enum has 8 values, it stores in 1 byte, if it has more than 8, it stores it in 2 bytes.

You're probably thinking of the SET type, rather than the ENUM type.

> You also had to be careful to include `ALGORITHM=INPLACE, LOCK=NONE;` in your `ALTER TABLE` statement when changing the enum or it would lock the table and rewrite it.

This is a common misconception; that's not how ALGORITHM=INPLACE, LOCK=NONE works. An ALTER TABLE without ALGORITHM and LOCK will, by default, always use the least-disruptive method of alteration possible.

Adding those clauses in MySQL just tells the server "fail immediately if the requested method is not possible". The semantics in MariaDB are similar, just slightly different for INPLACE, where it means "fail immediately if the requested method or a better one is not possible".
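To make the fail-fast semantics concrete, here's a minimal sketch (table and column names are made up; appending a value to the end of an ENUM is typically an in-place operation as long as the storage size doesn't change):

    -- Illustrative only: append a value to an existing ENUM column.
    -- With these clauses, MySQL errors out immediately if it cannot do the
    -- change in place without locking, instead of silently falling back to
    -- a copying table rebuild.
    ALTER TABLE orders
      MODIFY COLUMN status
        ENUM('new', 'paid', 'shipped', 'cancelled', 'refunded') NOT NULL,
      ALGORITHM=INPLACE, LOCK=NONE;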


> You're probably thinking of the SET type, rather than the ENUM type.

Ah yes, very Pascal.


You can store 255 values in one byte, and reserving two bytes is not what that did.

And even if I did, it still leaves the inability to actually rename enums without scanning the full table at least twice (which still doesn't seem possible in MariaDB, unless I missed something there).

If you potentially want great flexibility you shouldn't be using enums in the first place but int and a relational mapping to another table.
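A minimal sketch of that alternative, with made-up table names: statuses live in their own table, so adding or renaming one touches a single row instead of requiring any change to the big table.

    -- Illustrative lookup-table alternative to ENUM.
    CREATE TABLE order_status (
      id   TINYINT UNSIGNED PRIMARY KEY,
      name VARCHAR(32) NOT NULL UNIQUE
    );

    CREATE TABLE orders (
      id        BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      status_id TINYINT UNSIGNED NOT NULL,
      FOREIGN KEY (status_id) REFERENCES order_status (id)
    );

    -- Renaming a status is a one-row UPDATE; no rewrite of orders.
    UPDATE order_status SET name = 'dispatched' WHERE name = 'shipped';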


A byte would fit at least 255 different values, right? How often is this limit exceeded in practice?


"If addition doesn't work, I have no way of reasoning about anything in a program."

Hardware addition is filled with edge cases that cause it to not work as expected; I don't see it as that much different from the memory safety edge cases in most programming models. So by corollary is there no way to reason about any program that uses hardware addition?


>Hardware addition is filled with edge cases that cause it to not work as expected

I am not aware of a single case where integer addition does not work as expected at the hardware level. Of course if you have a different expectation than what the hardware does it could be called "unexpected", but I would classify this more as "ignorance". I think we can reword it as "addition must be predictable and consistent".

This is not the case in standard C, because addition can produce a weird value that the compiler assumes obeys a set of axioms but in fact doesn't, due to hardware semantics. Most C compilers allow you to opt into hardware semantics for integer arithmetic, at which point you can safely use the result of addition of any two integer values. That is really the crux of the matter here - if I write "a = b + c", can I safely examine the value "a"? In C, you cannot. You must first make sure that b and c fulfil the criteria for safe use with the "+" operator. Or, as is more usual, close your eyes and hope they do. Or just turn on hardware semantics and only turn them off for specific very hot regions of the code where you can actually prove that the input values obey the required criteria.
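A minimal C sketch of that crux, assuming GCC or Clang (variable names and messages are made up, but the flag is real; `-fwrapv` is the usual way to opt into wrap-around, i.e. "hardware", semantics for signed arithmetic):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int b = INT_MAX;
        int c = 1;

        /* In standard C, the signed overflow in b + c is undefined behaviour,
           so an optimizing compiler may assume it cannot happen and is free
           to delete the (a < b) check below.
           Built with -fwrapv (GCC/Clang), signed addition is defined to wrap
           in two's complement, and examining a is safe and predictable. */
        int a = b + c;
        if (a < b)
            printf("wrapped: a = %d\n", a);
        else
            printf("no wrap detected\n");
        return 0;
    }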


That's why I implement my own addition using XOR. Gotta build from strong foundations. It ain't fast, but it's honest math.


The difference is that hardware will treat addition in a predictable and consistent manner, but C and C++ will not. In C and C++, if you overflow two signed integers, the behavior of your entire program is undefined, not simply the result of the addition.

That's the difference between something being safe and local but perhaps it's unexpected because you lack the knowledge about how it works, and something being unsafe because there is absolutely no way to know what will happen and the consequences can be global.


> The difference is that hardware will treat addition in a predictable and consistent manner, but C and C++ will not. In C and C++, if you overflow two signed integers, the behavior of your entire program is undefined, not simply the result of the addition.

On the same hardware, yes, but the same C or C++ program may behave differently on different hardware specifically because the C abstract machine doesn't define what's supposed to happen. This leaves it up to (to your point) the compiler or the hardware what happens in, e.g., an overflow condition.

If you're planning on running the program on more than one CPU revision then I'd argue it introduces a similar level of risk, although one that's probably less frequently realised.


Leaving the behavior up to the compiler (or hardware) is not undefined behavior, that is unspecified behavior or implementation defined behavior.

Undefined behavior really does mean that the program does not have valid semantics. There is no proper interpretation of how the program is to behave if signed overflow happens. It's not simply that the interpretation of the program is beyond the language and left to the hardware or the operating system, it's that the program's behavior is undefined.


Gotcha, thanks for the clarification.


We used InfluxDB back in the 0.8/0.9 days and it worked really well, scaled nicely with the large number of metrics we were storing.

The switch to a tag based architecture in 1.0 completely broke the database for our use case, it could no longer handle large metric cardinality. Things improved a bit around 1.2, but never got back to something usable for us.

We ultimately moved to using clickhouse for time series data and haven't had to think about it since.

Where is influx at now? Can they handle millions of metrics again? What would bring us back?


InfluxDB 3.0 is built around a columnar query engine (Apache DataFusion) with data stored in Parquet files in object storage. Eliminating cardinality concerns was one of the top drivers for creating 3.0. I mention some of the other big things we wanted to achieve in some other comments in this HN thread.

InfluxDB 3.0 is optimized for ingestion performance and data compression, paired with a fast columnar query engine. So we can ingest with fewer CPUs, less RAM and reduce storage cost because it's all compressed and put into object store. And we support SQL now (in addition to InfluxQL) with fast analytic queries.

We don't have open source releases yet (that's for later this year), but we have it available in the cloud as a multi-tenant product or dedicated clusters.


It sounds like reimplementing ClickHouse, but worse. Sorry for pointing this out, but isn't it?


It actually sports much better ingest performance versus ClickHouse, and query performance is close and improving.


This is a bold claim.


We were stuck on 1.x for a long time. Downsampling seemed to be eternally broken (or rather not performant enough) regardless of versions so we wrote our own downsampler doing it on ingestion (in riemann.io).

And as the world seemed to converge on Prometheus/prometheus-compatible interfaces, we will probably eventually migrate to VictoriaMetrics or something else "talking prometheus".

InfluxQL was shit. Flux looks far more complex for the 90%+ of things we use PromQL for now, so it is another disadvantage. I'm sure it's cool for data science, but all we need to do is turn some things into a rate and do some basic math or stats on it.

> Can they handle millions of metrics again? What would bring us back?

We had one instance with ~25 mil distinct series eating around 26 GB RAM. I'd suggest looking into VictoriaMetrics. Mimir is a bit more complicated to run and seems to require far more hardware for similar performance, but has the distinction (whether that's an advantage or not, eh...) of using an object store instead of plain old disk, which makes HA a bit easier.


Yes, influxdb 3.0 uses a new columnar store engine (IOx) that offers "unbounded cardinality". See more at https://www.influxdata.com/blog/intro-influxdb-iox/

(Disclaimer: I work at InfluxData)


I've tried using OpenAPI a few times, it's been...lackluster... I probably won't use it again.

Here are my gripes:

1) For me one of the biggest selling points is client code gen (https://github.com/OpenAPITools/openapi-generator); a typical invocation is sketched after this list. Basically it sucks, or at least it sucks in enough languages to spoil it. The value prop here is: define the API once, code gen the client for Ruby, Python and Scala (or insert your languages here). Often there are a half dozen clients for each language, and often they are simply broken (the generated code just straight up doesn't compile). Of the ones that do work, you get random PRs accepted that impose a completely different ideological approach to how the client works. It really seems like any PR is accepted with no overarching guidance.

2) JSONSchema is too limited. We use it for a lot of things, but it just makes some things incredibly hard. This is compounded by the seemingly limitless number of versions or drafts of the spec. If your goal is interop, which it probably is if you are using JSON, you have to go out and research what the lowest-common-denominator JSONSchema draft support is for the various languages you want to use and limit yourself to that (probably draft 4, or draft 7).
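For point 1, the workflow being described is roughly this; the spec path and output directories are placeholders, while `typescript-fetch` and `ruby` are two of the many built-in generator names:

    # Placeholder paths; one run per target language.
    npx @openapitools/openapi-generator-cli generate \
      -i ./openapi.yaml -g typescript-fetch -o ./clients/typescript
    npx @openapitools/openapi-generator-cli generate \
      -i ./openapi.yaml -g ruby -o ./clients/ruby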

On the pros side:

It does make pretty docs - kinda wish it would just focus on this and in the process not be as strict, I think it would be a better project.


I find it odd that you've struggled so much with generating API clients. I've generated C# and TypeScript (Angular's HttpClient and React Query) clients for my API and never had any issues with them. With that being said, I didn't use OpenAPI's Java-based code generators and rather used ones made by third-party developers such as NSwag[0] and openapi-codegen[1].

[0]: https://github.com/RicoSuter/NSwag

[1]: https://github.com/fabien0102/openapi-codegen


You said it yourself — the “official” generator is awful and very hard to modify or extend (well, you didn’t say that, but I’m saying it) and while there are many alternatives, they’re not always easy to find. I had some success with swagger-typescript-api[1], but eventually got tired of it and wrote my own generator. Despite looking around quite a bit at what’s available, I never heard of openapi-codegen, which looks quite good.

I think it’s a pretty big problem for many devs that so many of the options are mediocre and they’re quite difficult to evaluate unless you have a lot of experience, and even then it takes a lot of time.

[1]: https://github.com/acacode/swagger-typescript-api


Technically, there isn't an "official" OpenAPI 3.x tool of any sort. SmartBear/Swagger is no more official than any other vendor with 3.0+ (obviously it's different for the versions named "Swagger"!). I am working on an official parser/linter (oascomply) on a contract for the OpenAPI Initiative to set a baseline to help tool developers implement consistent support for the spec. However, this is more about the parts of OAS outside of the Schema Object.


NSwag leaves much to be desired. I deployed an OpenAPI server, and the initial deployment partners using NSwag begged us to change the API to work around an issue that had been reported to NSwag and left unfixed for years. I respectfully told them to pound sand and deal with it manually, or better yet be a good citizen of the ecosystem they are using for free and contribute a patch. Last I checked they were still manually patching their codegen on each deploy. /Shrug

Nswag has important issues that are many years old still in their backlog.

1.6k issues, oldest unresolved 7 years old:

https://github.com/RicoSuter/NSwag/issues?q=is%3Aissue+is%3A...


I've been using redux toolkit's OpenAPI code generator for a side project and it's been pretty good. The documentation is a bit lacking and it could certainly use more work to make names more customizable. The generated code comes out looking very much machine generated. But I love that RTKQuery (redux toolkit query) has client side caching so that if I use a query param that was already used before, it will remember and just serve from the local cache.

But it's been nice being able to make a backend change, run the code generator, and then be able to use whatever API in react. I hope this type of stuff gets developed more!

[0] - https://github.com/reduxjs/redux-toolkit/tree/master/package...


I've had the same experience and gripes as the comment op, but RTKquery is one of the better experiences I've had with OpenApi based tooling.

But ya... JSONschema is confusing and doesn't really support type composition the way you think it would.

If your API model is simple, then you'll probably have decent experience with clients... But if you need "allOf/anyOf/oneOf" and to restrict "additionalProps", you're probably going to have a rough time...
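A minimal sketch of the kind of composition that trips people up (draft-07 semantics, made-up property names): the intent reads as "id plus name and nothing else", but additionalProperties can't see inside the allOf branches, so any object that actually has those properties fails validation, and most generators can't flatten it into a single type either.

    # Illustrative JSON Schema (shown as YAML), draft-07 semantics.
    allOf:
      - type: object
        properties:
          id: { type: string }
      - type: object
        properties:
          name: { type: string }
    # Intended as "no properties other than id and name", but this keyword
    # only considers properties declared at this level (there are none),
    # so id and name themselves count as "additional" and are rejected.
    additionalProperties: false

Newer drafts added unevaluatedProperties for exactly this case, but tool support lags, which loops back to the draft-compatibility gripe above.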


Yup. JSON Schema is a constraint system, not a data definition system. https://modern-json-schema.com/json-schema-is-a-constraint-s...


This. I've found you can get a long way with NSwag, and the barrier to entry is low. I've got a post that walks through how to get a TypeScript and a C# client generated.

https://johnnyreilly.com/generate-typescript-and-csharp-clie...


I'm working on a company https://speakeasyapi.dev/ with the goal of helping companies in this ecosystem get great production quality client sdks, terraform providers, cli(s) and all the developer surfaces you may want supported for your API. We also manage the spec and publishing workflow for you, so all you have to do is build your API and we'll do the rest.

Feel free to email me at sagar@speakeasyapi.dev or join our slack (https://join.slack.com/t/speakeasy-dev/shared_invite/zt-1cwb...) . We're in open beta and working with a few great companies already and we'd be happy for you to try out the platform for free!


Created some accounts for advertising? Accounts created some hours ago and just praising you endlessly…..


The original comment and every reply reads like it was copied straight from a marketing brief. This is everyone's reminder that Hacker News is not immune to astroturfing, and it's often done much more subtly.


Definitely check out Speakeasy— we've been using them and the experience + team are fantastic


We at Airbyte are happy users of the Speakeasy platform. The CLI generator is easy to get started with and generates nice api clients that are constantly getting better. Their api developer platform does a great job of managing new client builds and deploys to the package repositories as well. Super pleased with the experience so far.


What's the thought process behind the product with API Keys? Why do you build those for your end user, and what's the goal of someone using that? Unless I'm misinterpreting.


+1 to Speakeasy ~ Our end-users love to use the SDKs that are automatically generated through Speakeasy, based off our API.


I've been working with Speakeasy for a couple of months now to produce client libraries for our customers to use. They've finally made an OAS-based code generator that's great. In fact, it's getting even better, with useful functionality being released on an almost biweekly basis. I would strongly recommend Sagar and the Speakeasy team to anyone looking to support high quality client libraries for your customers.


I'm one of the builders of an open source project (https://buildwithfern.com/docs) to improve API codegen. We built Fern as an alternative to OpenAPI, but of course we're fully compatible with it.

The generators are open source: https://github.com/fern-api/fern

We rewrote the code generators from scratch in the language that they generate code in (e.g., the python generator is written in python). We shied away from templating - it's easier but the generated code feels less human.

Want to talk client library codegen? Join the Fern Discord: https://discord.com/invite/JkkXumPzcG


I think it's worth pointing out that a lot (most?) of Fern's complaints against OpenAPI are really complaints against JSON Schema. There have been talks before about allowing other schema systems in OpenAPI - I wouldn't be incredibly surprised to see such things come up for Moonwalk. JSON Schema is not a type definition system https://modern-json-schema.com/json-schema-is-a-constraint-s....

It's also worth noting that most JSON Schema replacements I've seen that prioritize code generation are far less powerful in terms of runtime validation (I have not examined Fern's proposal in detail, so I do not know if this is true for them).

The ideal system, to me (speaking as the most prolific contributor to JSON Schema drafts-07 through 2020-12), would have clearly defined code generation and runtime validation features that did not get in each other's way. Keywords like "anyOf" and "not" are very useful for runtime validation but should be excluded from type definition / code generation semantics.

This would also help balance the needs of strongly typed languages vs dynamically typed languages. Most JSON Schema replacements-for-code-generation I've seen discard tons of functionality that is immensely useful for other JSON Schema use cases (again, I have not deeply examined Fern).


Just wanted to chime in and say we are a big fan of Fern! It makes developing our APIs 10x easier as we get full end-to-end type safety and code completion, but the really great part is we get idiomatic client and server SDKs as well as OpenAPI output that we use to autogen documentation. Our small team of two engineers are able to ship multiple client facing SDKs because we are built on Fern!


Speed of development on these guys is huge, and I have enjoyed using their SDKs. The community is getting involved in building things like auto-retry with backoff and other pretty helpful features in the SDKs. Big fan of these guys!


I think that’s what most people are using it for, but having a single expressive (debatable) language to describe the contract an api offers has soo much potential.

GraphQL promised us apis that we can trust - since both the client and the server were implemented with the same schema, you would know for sure which requests the api would respond to and how, and if it tried to do something outside of the schema, the server lib itself would throw a 500 error. This allowed you to generate lean, typesafe clients.

OpenAPI kinda allows you to do that but for any other http api - I’ve written some code to use the schema as a “source of truth” for the server code as well, proving at compile time that the code will do the correct requests and responses for all the endpoints, paths and methods. So if you are reading the schema, you know for sure that the api is going to return this, and any change has to start from modifying the api.

And in turn this allows a “contract first” dev where all parties agree on the api change first, and then go to implement their changes, using the schema as an actual contract.

Combine this with languages with expressive type systems, and it allows you a style of coding that's quite nice - "if it compiles it is guaranteed to be correct". Now of course this does not catch all bugs, but it kinda confines them to mostly business logic errors, and frees you from needing to write tons of manual unit tests for every request.

Oh as a bonus it can be used for runtime request validation as well, which allows you to have types generated for those as well, for the client _and_ the server! Makes changes in the api a lot more predictable.

Client / server code generation can also be implemented as just type generation with no actual code being created, sidestepping a lot of complaints about code generators.
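A rough TypeScript sketch of that "types only" idea; every name below is invented for illustration, standing in for what a schema-to-types generator would emit:

    // Illustrative stand-ins for schema-derived types.
    interface GetUserResponse {
      id: string;
      name: string;
    }

    // One entry per path/method in the spec; the contract lives in the types.
    type Handlers = {
      "GET /users/{id}": (params: { id: string }) => Promise<GetUserResponse>;
    };

    // Plain application code, checked against the contract at compile time;
    // returning the wrong shape is a type error, not a runtime surprise.
    const handlers: Handlers = {
      "GET /users/{id}": async ({ id }) => ({ id, name: "Ada" }),
    };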

I did package it up as open source https://github.com/ovotech/laminar but no longer have access to maintain it as I no longer work there, unfortunately.


> "contract first” dev

Just wanted to say that this is very cool, and I find it hard to understand why this is not already the norm in 2023. I've done something quite similar in a proprietary project (I called it "spec-driven development" in reference to "test-driven development").

I would first start by writing the OpenAPI spec and response model JSON schema. I could then write the frontend code, for example, as the API it called on the server was now defined. Only as the last step I would actually integrate the API to real data - this was especially nice as the customer in this particular project was taking their time to deliver the integration points.

All the time during development the API conformity was being verified automatically. It saved me from writing a bunch of boilerplate tests at least.


The reason why no one does “contract or spec driven development” is that in most real life cases the spec is not known in advance, and it’s very very hard to get the spec right before starting to write a single line of code.

It’s often much more practical to integrate early and then iterate on the implementation and the spec at the same time until reaching a stable point.


Thing is - with this setup the spec is the code: you iterate on it as you would your implementation; it just makes sure that it's guaranteed to be correct.

What I’ve seen so often (and why I implemented this) was that the spec will be written, found insufficient, and the code updated, leaving the spec out of date.

Having to write the spec first allows you to actually iterate on it before implementing the code, with all the server / test client generators. So the frontend team can start working on its end before the backend even implements anything, as most of its tests would be done against mocks / fake data generators.

Even better, since the spec is a source of truth for soo many things - validation, client test requests, server test responses, docs, as well as the paths / endpoints in both client and server implementation, it saves an enormous amount of time and communication energy to have it be implemented in one place.


FWIW, there are a few companies cropping up now doing better codegen for client libraries. I'm starting one of them: https://stainlessapi.com

Unfortunately we don't yet have a "try now" button, and our codegen is still closed-source, but you can see some of the libraries we've generated for companies like Modern Treasury and sign up for the waitlist on our homepage.

Always happy to chat codegen over email etc.


CTO of Modern Treasury here. We're very happy with the quality of the client libraries being generated by Stainless. It also integrates nicely into our workflow, we've got the releases almost fully automated whenever new API routes are added or changed.


I'm looking for a codegen tool for client SDKs for my product, would love to use your product. My email is in my profile, if you want to chat.


Sad to see it isn't just me. I had very real vibes of "surely I'm holding this wrong" in my building an OpenAPI file. And you didn't even mention tools to help deploy, just to help make a client.

To add to my difficulty, the document generation inside Sphinx was less than up to date, such that I didn't even get the pretty docs.


The Java generator is pretty good, many big companies are using it for generating both client and server code, I'm especially happy with the Java Spring Boot generator, I've been using it for both reactive and 'standard' code generation.


I am using the 'spring' generator and it works fine. Just fine. It could be better if it would use more Spring features.

It saves hours and hours of development time. And the ability to regenerate the whole application on spec changes is amazing.


I'm sorry, but you have completely misunderstood the purpose of Open API.

It is not a specification to define your business logic classes and objects -- either client or server side. Its goal is to define the interface of an API, and to provide a single source of truth that requests and responses can be validated against. It contains everything you need to know to make requests to an API; code generation is nice to have (and I use it myself, but mainly on the server side, for routing and validation), but not something required or expected from OpenAPI

For what it's worth, my personal preferred workflow to build an API is as follows:

1. Build the OpenAPI spec first. A smaller spec could easily be done by hand, but I prefer using a design tool like Stoplight [0]; it has the best Web-based OpenAPI (and JSON Schema) editor I have encountered, and integrates with git nearly flawlessly.

2. Use an automated tool to generate the API code implementation. Again, a static generation tool such as datamodel-code-generator [1] (which generates Pydantic models) would suffice, but for Python I prefer the dynamic request routing and validation provided by pyapi-server [2].

3. Finally, I use automated testing tools such as schemathesis [3] to test the implementation against the specification.

[0] https://stoplight.io/

[1] https://koxudaxi.github.io/datamodel-code-generator/

[2] https://pyapi-server.readthedocs.io

[3] https://schemathesis.readthedocs.io


To your first point - you may think it detracts from perceived value, but you can just write your own code generator for openapi - it's not that hard, and you'll probably end up with a higher quality client that fits your preferred patterns better.

This is still a win because you can still generate all your clients in sync with your API spec rather than doing all that manually.


> For me one of the biggest selling points is client code gen. Basically it sucks

I agree that the official codegen is not that great. One of my former colleagues started guardrail[0] to offer better client -- and server -- codegen in Scala for a few different http/rest frameworks. Later, I added support for Java and some Java frameworks. (I haven't worked on the project in over a year, but from what I understand, it's still moving forward.)

Obviously that's a fairly limited set of languages and frameworks compared to what the official generators offer, and there are some OpenAPI features that it doesn't support, but guardrail is a good alternative if you're a Java or Scala developer.

> JSONSchema is too limited

I've run into some of the problems you've described, which can be a big bummer. For new APIs I'd designed, I took the approach of designing the API in a way that I knew I could express in OpenAPI without too much trouble, using only the features I knew guardrail supported well (or features I knew I could add support for without too much trouble). It's not really the ideal way to design an API, but after years of that sort of work, I realized one of the worst parts of building APIs is the tedious and error-prone process of building server routes or a client for it, and I wanted to optimize away as much of that as possible.

Ultimately my view is that if you are writing API clients and servers by hand, you're doing it wrong. Even if you end up writing your own bespoke API definition format and your own code generators, that's still better than doing it manually. Obviously, if something like OpenAPI meets your needs, that's great. And even if you don't like the output of the existing code generators, you can still write your own; there are a bunch of parser libraries for the format that will make things a lot easier, and it really isn't that difficult to do, especially if you pare your feature support down to the specifics of what you need.

[0] https://guardrail.dev


I think OpenAPI needs to step up its game with its code generators, because that too has been an issue for me; I've seen a few issues pop up over time.

It's only useful for generating types; most generators' APIs are stubs at best, which means it's pretty much useless for evolving API specifications.

JSON has its limitations, in that its type system is different enough from other languages that back-end generated code often feels awkward.

I think that the foundation should take ownership of the generators and come up with a testing, validation & certification system. Have them write a standardized test suite that can validate a generated client, making sure there's a checklist of features (e.g. more advanced constructs like `oneOf` with a discriminator, enums, things like that).

And they should reduce the number of generators. Have one lead generator for types, then maybe a number of variants depending on what client the user wants to use. But those could be options / flags on the generator.

Of course, taking a step back, maybe OpenAPI and by extension REST/JSON is a flawed premise to begin with; comparing it with e.g. grpc or graphql, those two are fully integrated systems, where the spec and protocol and implementation are much more tightly bound. The lack of tight bounds (or strict standards for that matter) is an issue with REST/JSON/OpenAPI.


I agree that codegen + docs is actually where most of the value is. The problem I think is in the design. Having an intermediate spec makes all the downstream tooling for codegen + docs need to handle all the complexity. If any information is lost (because the spec is insufficient for whatever use case), you end up with a worst-of-both-worlds situation: your code or docs gen tooling is less direct to use, and is now missing context.

Another way of handling this is getting the server you are interacting with to be able to generate the code directly, based on its own internal knowledge of how the APIs are put together. This puts more onus on the library creators to support languages etc, but provides a much better experience and a better chance things will 'just work', as there are just fewer moving parts.

ServiceStack is a .NET library that does this with 'Add ServiceStack Reference'[0], which enables direct generation of Request and Response DTOs with the APIs for the specific server you are integrating with. IDE integration is straightforward since pulling the generated code is just another web service call. Additional language generators are integrated directly. It has trade-offs, but I'm yet to see a better dev experience.

[0] https://servicestack.net/add-servicestack-reference

(Disclaimer I work for ServiceStack).


I find the code gen less valuable than having a machine-readable spec that I can test against.


It's always nice to read this and know I am not an opinionated asshole, and that other people share the misery. I admit I've been duped using OpenAPI. Generating the schema via FastAPI and Nest.js works pretty well. But like you, we have been sorely disappointed by the codegen.

Anyone care to suggest alternatives though, assuming we want to call from node to python? I actually believe that having api packages with types is one of the only things startups should take from the enterprise world. I thought about GRPC, I had good experience with it as a developer, but the previous company had a team of people dedicated just to help with the tooling around GRPC and Protobufs.

So I picked OpenAPI, figuring simple is better, and plaintext over http is simpler. And currently I do believe it's better than nothing, but not by much. I am actually in the process of trying to write my own codegen and seeing how far I can get with it.

Are protobufs with gRPC really the way to go nowadays? Should a startup of 20 developers just give up and document the api in some shared knowledge base, and that's it?


NSwag does a wonderful job of generating TypeScript clients from OpenAPI specs. Definitely give it a shot before killing your current setup.

https://github.com/RicoSuter/NSwag (It sucks in any OpenAPI yml, not just ones from Swashbuckle/C#)


@sthuck I'm working on an alternative in this space called Fern. Like gRPC, you define your schema and use it to generate client libraries + server code. Fern integrates with FastAPI/Express/Spring and generates clients in Python, Typescript, and Java.

Check out this demo: https://www.loom.com/share/42de542022de4e55a1349383c7a465eb. Feel free to join our discord as well: https://discord.com/invite/JkkXumPzcG.


Yeah, I had this experience too. I figured there'd at least be static checking that would make sure we aren't going off spec, but there isn't really. So you just have a spec that slowly becomes out of sync with the code until it's basically useless. Just seems like double work for almost no benefit.


It depends on the tools you use. You can use tester proxies that validate all requests and responses to the specs, or you can do server code gen (with interfaces/Subclasses for example in Java) so you are forced to adhere to the specs.


I actually got the OpenAPI docgen magic to work 100% all the way into Azure API Management Service such that our branded portal's docs were being updated based upon code comments each time upon merge. It really is something to marvel at. It actually worked.

That said, I didn't like the amount of moving pieces, annotation soup in code, etc. I got rid of all of it. Instead of relying on a fancy developer web portal with automagically updating docs, I am maintaining demo integration projects in repositories our vendors will have access to. I feel like this will break a hell of a lot less over time and would be more flexible with regard to human factors. Troubleshooting OpenAPI tooling is not something I want myself or my team worrying about right now.


Good to know. I'd like to learn about the process you had set up and the number of moving pieces it required. Have you written about this process? Can I read about it somewhere?


1) hits home. The lack of a proper AST, "logicless" Mustache templates for code generation, lack of tests... Also, OpenAPI seems to allow features in the language before at least a handful of first class code generators have support for that feature, just because one generator supports it. And it's not that the others refuse - they just produce broken code.


As for point 1) I fully agree. I'm using it a lot currently due to lack of alternatives, mainly with java. Swagger codegen is the one I've had most success with, but both openapi and swagger codegen shares the same problems.

For internal projects we use grpc which is a breeze to use in comparison.


The biggest issue with current attempts at typing in Ruby is the choice of a nominal type system. If there ever was a language that called for structural typing, it's Ruby.


> Our test harness didn't catch it (weird combination of reasons, too long ago for me to remember the details) & it rolled out.

This is why I'm pretty dogmatic about variable comparison in tests.

This is dangerous, and stuff like this has caused a lot of bugs to slide through in my experience (and maybe the OP's):

    expect(account1.balance).to eq account2.balance
This is safe and specific:

    expect(account1.balance).to eq 2500
    expect(account2.balance).to eq 2500
Unfortunately I've run into a lot of folks that take major issue with the latter because of 'magic numbers' or some similar argument. In tests I want the values being checked to be as specific as possible.


One of the tenets of unit testing, I don’t know if I derived this for myself or got it from someone, is that you get a lookup on the right side of the expectation or the left, but not both.

There are too many yahoos out there writing impure functions or breaking pure ones that will mangle your fixture data. And the Sahara-DRY chuckleheads who see typing in 2500 twice (but somehow are okay with typing result.foo.bar.baz.omg.wtf.bbq twice) as some crime against humanity exacerbate things. Properly DAMP tests are stupid-simple to fix when the requirements change.


I get the intention, but a simple x = 2500 would satisfy both worlds.


I fend off the "magic numbers" people with a variable with a name that describes what the magic number is for.

  var expectedAccountBalance = 2500
  expect(account1.balance).to eq expectedAccountBalance
  expect(account1.balance).to eq account2.balance


That's my approach as well. My idea is I can change the numbers, and it still better come out right. It forces you to think through (and code) things more clearly. It's too easy to write a test with hard-coded numbers that "just happens" to come out right.


That's a good enough (initial) workaround if you're in an environment where you're in the minority (or alone) w/ your opinion and you still want to do the right thing personally.

I would advise you to try and convince your peers though and teach them the better way because I suspect that the people that you're fending off would not do what you did but rather just go w/ the one

    expect(account1.balance).to eq account2.balance
Now while parts of the code base do the right thing (in a slightly long winded way), the rest of the code base written by these other people is still using bad tests.


As long as there is

    expect(account1.balance).to eq 2500

before that, it isn't really a big problem.

Also "how to do it" is kinda different topic to "how to convince peers that's the right way"


Various things that make "magic numbers" bad in regular code (like having to change them manually) make them good for testing, or at least less bad; a constant for that is still what I'd usually use, but a pretty specific constant, sometimes at the unit test level - shared ones get dicey.

Agreed on the ease of having problems of using variables on both sides.


One of the biggest ways that test code is not production code is that test code is only read by humans when the tests are failing. Whereas any time I'm working on a regular feature I am likely to be looking at log(n) lines of our codebase due to the logic that exists around the code I'm trying to write, and changing log(log(n)) lines of existing code to make it work - if the architecture is good.

Code that is write once read k < 10 times has very different lifecycle expectations than code that is constantly being work hardened.


thanks. i do the same and consistently have to push back on reviewers (why don't they learn?) that the hardcoded number is there for a reason


What the complainers often like to do is hoist variable declarations out of the local scope. So to them having a test suite that uses 1500 in four places is wrong, and up to that point they are perfectly right.

The trick with the constants is that if they are declared and used in the same test scope, then the data is a black box. Nobody else 'sees' it, nobody interacts with it. The only time that's not true is when there's false sharing between unit tests and those tests are fundamentally broken. In that case the magic number is not the problem, it's the forcing function that makes you fix your broken shit.


(500mi / 6.5mpg) * $5.313 = $408.69

6.5mpg - I just Googled average MPG for a fully loaded Semi, could be wrong.

$5.313 - Most recent highway diesel price (https://www.eia.gov/petroleum/gasdiesel/). I'd expect this is high due to fleet discounts and such.


I feel like that average is really low. Talking with my buddy who is a trucker, he said that 6mpg is on the low end. His truck, which isn't anything special, would usually get 8-9. He said those guys getting 6 or lower are usually owner / operators who have older trucks, either because they like the older trucks or they can't afford to get a new one. Most fleets have much newer trucks, a lot of which can get up to 10mpg or more. That may not seem like a huge improvement, but over thousands of miles it adds up really fast.


Client side validation, regardless of the technology used, is a performance and UX optimization.

The backend must always validate and it's this validation that matters for any sort of 'security' or data integrity issue.


> In React 17, React automatically modifies the console methods like console.log() to silence the logs in the second call to lifecycle functions. However, it may cause undesired behavior in certain cases where a workaround can be used.

I feel like when you've gotten to the point that something like this has been proposed and accepted as a good solution to a problem your framework is facing, it may be a good time to stop and reconsider the approach of the framework.
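For anyone who hasn't run into it, a minimal sketch of what that looks like in practice (the component and message are made up): in development, <React.StrictMode> intentionally invokes the component body twice per render, and React 17 silenced the second console.log rather than showing it.

    import React from "react";
    import ReactDOM from "react-dom";

    function Counter() {
      // Runs twice per render in development under StrictMode; in React 17
      // the second call's console output was silenced.
      console.log("rendering Counter");
      return <div>hello</div>;
    }

    ReactDOM.render(
      <React.StrictMode>
        <Counter />
      </React.StrictMode>,
      document.getElementById("root")
    );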


This gets at something deeper: React has a policy of “in the dev environment anything goes; we will mess with your code and its runtime performance to help detect bugs or future bugs; you want us to do this, even if you think you don’t.”

But this assumes good QA and staging environments exist and are used correctly. In reality, many users of React test on local and immediately deploy into production. The more the environments differ, the less friendly React is to these users. Timing issues and race conditions may surface in production. And thank goodness this isn’t Python where a log statement can consume an iterable if you allow it to do so!

And for those saying “Strict Mode is opt-in” - your coworker may opt in your whole app without you knowing it. Hopefully you see the PR or the Slack thread. So much in React now enables this “spooky action at a distance.”

The React team has put a lot of thought into balancing the performance needs of large codebases with the stability needs of smaller teams with less robust SDLC. I can’t think of a better workaround. But it’s still going to cause chaos in the wild.


I agree with the sentiment of not being comfortable with dev and prod being different.

That said, with React these dev mode differences are intentionally about making obscure production issues (such as timing ones) rear their ugly heads in development. The goal is that if your code works without issues in dev, it’ll work even better in production. It’s a thin line for sure, but I broadly support the way they’ve done it so far.


can be solved this way:

- npm start when developing

- deploy on dev using the artefacts from npm run build

builds take a bit longer this way, but it guarantees QA will test using a non-dev build


> In reality, many users of React test on local and immediately deploy into production.

> deploy on dev using the artefacts from npm run build

These two statements do not fit together.

I agree that "You should have dev/test/staging/... environments" is the correct answer, but clearly that's not the reality for everyone.


> clearly that's not the reality for everyone

might not be the reality, but multiple envs is the only correct answer.


Broadly speaking I think React is in a weird spot where they have painted themselves into a corner with no obvious way of fixing it by essentially forking the DOM and doing so many things independently of the wider web platform. That approach made a lot of sense when it was first released but is just a liability at this point.

If you’re deep in React land and it’s all you know I think 2022 might be a good time to expand your horizons a bit because the landscape around you has changed a lot in the past five years.


> Broadly speaking I think React is in a weird spot where they have painted themselves into a corner with no obvious way of fixing it by essentially forking the DOM and doing so many things independently of the wider web platform. That approach made a lot of sense when it was first released but is just a liability at this point.

The problems that strict mode is addressing with this somewhat unusual approach have absolutely nothing to do with the VDOM. Rather it is that code using React is subject to restrictions (being pure functions and following the hook rules when using hooks) that are impossible to express or enforce at compile time in javascript.
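For example, here's the kind of hook-rules restriction that JavaScript itself can't enforce (component and prop names are made up); only the lint rule or React's runtime warnings catch it:

    import { useState } from "react";

    function Profile({ showEmail }) {
      if (showEmail) {
        // Conditional hook call: the number and order of hooks now differ
        // between renders, which breaks React's internal hook bookkeeping.
        const [email, setEmail] = useState("");
        return <span>{email}</span>;
      }
      return null;
    }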


Sorry, just to be clear, I wasn't tying this to the strict mode situation, but was attaching myself to the concept that this is just one of several points where the framework is showing signs that it's time to start thinking about alternatives, particularly because I don't think some of them are fixable at this point; they are just too central to the entire project, and those decisions no longer make sense.


Outside of Vue, and maybe Angular, are there any other stable alternatives with a decent ecosystem? I'd love to hop off the React train, but I haven't found anything that compares to the experience of just using create-react-app with Typescript support.


Try solid.js [1]

I have been using it for a month now and love it. If you are coming from react the API is familiar enough that you can get productive in a day or two. Reliance on observables is a big plus for me (no virtual dom diffing) and the dom reconciliation is very similar to lit. Check out the author's blog posts [2] for more details.

If you are into jamstack, Astro [3] has good support for solidjs already and offers an easy way for selective hydration of dynamic parts of the UI.

The component ecosystem is a bit lacking compared to react/vue, but it pairs well with pure css libraries like bulma/daisyui or webcomponent libraries like ionic/shoelace.

And you can also wrap your solid components as web components and use them in a larger app without converting it all to use solidjs.

[1] https://www.solidjs.com/

[2] https://ryansolid.medium.com/

[3] https://astro.build/


RiotJS and SvelteJS get mentioned as alternates in my circles. I like Riot - but only done it on smaller projects.


If you haven't, check out ember-cli + ember.js' latest Octane release. Full typescript support, thriving community, lots of companies using it, lots of active development.


Any thoughts about [Lit](https://lit.dev/)?


For whatever it is worth this is the one I am betting on.


Aside from the bustling ecosystem, Surplus has been around for a long time, consistently beats benchmarks, and is quite lightweight and unopinionated. I've used it for a lot of projects, small and large, quite successfully.


Have you checked out https://mithril.js.org/


I remember Ember being solid for a while, but I fell off of the front-end stuff


When React first came out, it was a breath of fresh air. There were some weird kinks, but instead of getting better they doubled down on the magic, and it got worse over time.

I would really like something like React, but completely explicit - no hidden state, no hooks, no monkey patching. Everything class based, you have one render method, one method where you can launch your ajax, and you give your data dependencies explicitly instead of React trying to infer something and calling your functions at random.


> Everything class based, you have one render method, one method where you can launch your ajax, and you give your data dependencies explicitly instead of React trying to infer something and calling your functions at random.

Class based components in React got pretty close. But for whatever reason, everyone jumped on the hooks bandwagon, against which I have some personal objections: https://blog.kronis.dev/everything%20is%20broken/modern-reac... (harder to debug, weird render loop issues, the dev tools are not quite there yet)

That said, using the Composition API in Vue feels easier and somehow more stable, especially with the simple Pinia stores for data vs something like Vuex or Redux.


Reginald Braithwaite had some really great responses to the move away from OOP as well: [0]http://raganwald.com/2016/07/16/why-are-mixins-considered-ha... [1]http://raganwald.com/2016/07/20/prefer-composition-to-inheri...


It would be enough for functional components to have extensible inputs and outputs:

    function Button({props,ref,key,hooks}){
      useEffect(hooks,()=>{...})
      return {render: "text", ...stuff}
    }


Yeah come on, the global hidden variables are completely unnecessary.


You may just want HTMX, right?


It's not a "solution". It's more akin to a lint rule imho. Javascript provides no language level guarantees. There is no static type checker or compiler to enforce these things. So we rely on linters, tests, and assertions. Yay dynamic languages.


Thank you for the feedback. I agree this behavior was confusing. (We changed it to slightly dimming the second log in response to the community discussion. This change is released in React 18.) The new behavior seems like the tradeoff we dislike the least but new ideas are always welcome.

Overall, we’ve found that calling some methods twice is a crude but effective way to surface a class of bugs early. I absolutely agree that it’s not the best possible way to do that, and we’re always thinking about ways to surface that earlier — for example, at the compile time. But for now, the current solution is the best we have found in practice.


Thanks for clarifying Dan. I think it might be appropriate for React to console.warn that strict mode is active (in dev mode) with a short description of this side effect, or perhaps the “dimmed” log message could have an explanation after it. Not all users have explicitly enabled strict mode (e.g. in my case I just started a new Next.js project) so this behaviour can be quite surprising and hard to track down why it’s happening.


The problem with logging a message is that it creates permanent console noise for people who enable it, thereby incentivising people not to. It seems like a small thing but people are very sensitive to extra logs. And if you allow silencing the log, we’re back to where we were.


I guess I was imagining a single warning message at the top, “React is running in strict mode. This may result in you seeing console log messages twice. Read more here. Click here to disable this message for this site. Click here to disable this message globally”. Not too much noise really, especially if you can disable it per site or globally.

I do see where you are coming from, I’m just thinking of my experience where I spent 30 mins+ thinking I’d found a bug or fundamentally misunderstood React all these years or whatever, and it turned out it was just the strict mode - but because I was using Next.js I never explicitly opted in, so had no way of knowing this might start happening (unless I read every release note). I’m guessing a lot of other developers using Next might be similarly confused!


I agree, at first reading this sounds just plain horrible. I haven't tried doing any research, can anyone point out why it's not?


"Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands."

- Someone


Your quote is better, but for people that have a hard time with that analogy, I like using the example of building a 4-story concrete/brick building - anyone can make a building stand strong by filling it with concrete / bricks. It takes precise engineering to know how thin you can make the walls/supports to make that building worth building.


If you just mindlessly fill everything with bricks and concrete, you likely need to make the lower walls thicker than the upper ones to support the weight...

The whole engineering process can be done without ever looking at the budget as long as the framework is given ("use these materials", "you have this much space").


And who came up with that framework?


With different constraints...

"The perfect race car crosses the finish line first and then completely disintegrates"

- Ferdinand Porsche

(I took some liberty with the translation - the idea is that everything breaks at the same time.)


"The perfect racing car crosses the finish line first and subsequently falls into its component parts." ?

* https://quotefancy.com/quote/1791488/Ferdinand-Porsche-The-p...


"An engineer can do for a dime, what any idiot can do for a dollar" - Someone else

[edit] Achshually "An engineer can do for a dollar what any fool can do for two" - Arthur C. Wellington


Brilliant!

