I’m grateful to have the option if and when the time comes. Even before that age, QoL needs to be of a certain level or possible to improve, else I’m unsubscribing.
+1. Linear is a great PM tool, maybe even the best. What makes it awesome is their support team. I’ve been in touch with them a handful of times over the past ~5 years, and each time, they’ve been excellent — fast response times, with genuine and tech-savvy people on the other side.
I love these kinds of products, and welcome any competition in the space. But, this comparison to Nango doesn't seem accurate, so I feel inclined to comment.
Please correct me if I'm wrong, but you say...
> Even though [Nango] seem to have more integrations
Nango has north of 100 integrations, Revert seems to have 4 atm?
> our integration support is better than them in terms of the depth of use-cases allowed (more standard objects supported, custom properties, field mapping support, custom objects (soon) etc).
How so?
Nango Sync gets you easy access to the raw API responses from the 3rd party service, and lets you map that to whatever shape/model you, as the implementer, want to end up with.
Revert seems to return standardized/normalized objects per data model (e.g., company, contact, task) across the 4 different integrations currently mentioned. It also seems to support "custom mapping" past the "lowest common denominator" schema, by adding `sourceFieldName` -> `targetFieldName` mappings (but seemingly only for picking out a response key if it's a string, not any "pick from object" or "compute based on multiple properties"?)
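To illustrate the limitation I mean (a hypothetical sketch, not Revert's actual code): plain key-to-key mappings can rename fields, but can't derive one target value from several source properties at once.

```typescript
// Hypothetical sketch of a key-to-key field mapping.
// Each entry renames one source key; there is no hook for computing
// a value from multiple source properties.
type FieldMapping = Record<string, string>; // sourceFieldName -> targetFieldName

function applyMapping(
  source: Record<string, unknown>,
  mapping: FieldMapping
): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const [from, to] of Object.entries(mapping)) {
    result[to] = source[from]; // picks exactly one key, no computation
  }
  return result;
}

const raw = { first_name: "Ada", last_name: "Lovelace" };
console.log(applyMapping(raw, { first_name: "firstName" }));
// { firstName: "Ada" } -- but a "fullName" built from both fields is out of reach
```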
Please don't take this as discouragement -- it's a great space to play in, and there's a lot of room for improvement. But, as a _very_ happy user of Nango over the past 10+ months, I feel you should compare yourself honestly at the very least.
> Even though [Nango] seem to have more integrations
We agree Nango has more integrations and we love OSS software so I'm with you on this. Credit where credit is due and we don't want to make false claims at all. We never claimed to have more integrations than them. I'm not sure how what I posted came off as dishonest.
> but seemingly only for picking out response key if they're strings, not any "pick from object", or "compute based on multiple properties"?)
I'd say we support this perhaps in a different way.
I haven't used Nango myself, so I can't comment on the specific ways it handles data vs. how we handle it.
It's great that you're liking Nango, and we want OSS/the better product to win regardless.
Yeah, sorry, I just got caught up in your wording. Since you asked: "Nango seems to have more integrations" feels disingenuous, when you're comparing 4 to 100+. You'll likely be asked to compare yourself with Nango a lot, so it's not a bad idea to know what you're up against.
In any case, I wish you the best of luck with the "one model per resource type" concept you're trying. It's a tricky one, since you're usually stuck with the lowest common denominator.
I expect many, if not most, users will need additional custom mapping. So if "field A" -> "field B" mapping is the only option for now, expect to run into lots of feature requests that need to pick from objects or compute multiple values into one field. DX around this will be important.
After buying in to OpenAPI as the fundamental source of truth (generated via https://www.ts-rest.com contracts in my case), I have radically changed how I think about web development.
At this point, it's frankly absurd to me how many people out there make it so hard for themselves by manually building/typing/validating both client & server-side responsibilities in most typical REST setups. I get it -- I spent >2 decades in that reality, but no more. I will die on this hill.
I am likely understating the impact when I say I'm 2x as productive whenever I touch API related functionality/validation on client or server-side.
MSW, Zod, react-hook-form+ZodResolver and several other tools in the OpenAPI ecosystem are simply incredible for productivity. The project I'm on now is certainly the most fun I've ever built because of these amazing tools.
> The big bonus of the human documentation approaches today is that time is somewhat combined with building the client.
This is wild to me; human documentation is absurdly error-prone, and it's almost always out of date, often immediately. (Zod or another DSL) -> OpenAPI -> generated docs (and types! and clients! and mocks!) is always going to be better: always accurate, and faster. The upfront cost is slightly higher, but the ROI is _significant_.
OpenAPI specs lend themselves to excellent docs, à la Mintlify or Docusaurus -- even interactive ones, like Swagger UI. The vast majority of API browsers & tooling understands OAPI, so why re-create an (often incomplete) version of the truth when using those tools?
> Whatever is overall fastest and gets me on to the problems I'm really trying to solve.
You may start (slightly) faster, but you'll incur significant cost when you move past the "trivial implementation" stage.
For instance:
- Do you do request & response validation on the server? That'll often need duplication on the client (e.g., error messages), and once out of sync, client-side validation mismatches the server-side response.
- TypeScript on client & server? Then you're already doing the manual work (often more than once) that OAPI->types would get you for free.
- Implementing client-side XHR calls manually, and getting the typing right, is a pretty significant undertaking. Multiply that by the number of client-side stacks the API will be consumed by. Or just generate them via OAPI (or infer them in real time via something like ts-rest).
- TS on client, but another BE stack? OpenAPI, when used right, ensures the "contract" is 1:1. When BE changes, client needs changing -- or it breaks. You want this safety.
- Manually mocking API responses is wasteful; write good OAPI specs and auto-generate mocks (e.g., MSW).
- Do you test the real API implementation? OpenAPI specs can help you do that automatically.
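To make the validation-duplication point concrete, here's a minimal sketch of the single-source alternative (a hypothetical hand-rolled validator, not any specific library's API): the rule and its error message live in one place, and both the client form and the server handler call the same function.

```typescript
// One rule, one error message, defined exactly once.
// (Hypothetical minimal validator; real setups use Zod/OpenAPI tooling.)
const emailRule = {
  message: "must contain @",
  check: (value: string) => value.includes("@"),
};

// Shared by client-side form validation AND the server-side handler,
// so the error text can never drift between the two.
function validateEmail(value: string): string | null {
  return emailRule.check(value) ? null : emailRule.message;
}

console.log(validateEmail("ada@example.com")); // null (valid)
console.log(validateEmail("not-an-email")); // "must contain @"
```

The point isn't the trivial check; it's that once the rule is duplicated by hand on both sides, any edit to one copy silently desynchronizes the other.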
At this stage of my career, I would turn down a job offer from a company/team that wasn't willing to use OpenAPI or an equivalent single source of truth (*unless I'm in a truly desperate situation).
Cool to see more of these kinds of projects, nice work OP!
I'm a huge fan of this general concept, so you're definitely on the right path imo. That said, two things are jumping out at me:
- Users would still be writing OpenAPI specs/JSON Schema by hand, an incredibly annoying and tedious process (unless using a nicer DSL)
- Generation/build steps are annoying (but likely unavoidable here)
As pointed out by many other comments, an unfortunate number of teams aren't writing OAPI specs. I personally feel this is a major mistake for anyone building an API, but that's a discussion for another day.
I've been using https://www.ts-rest.com, a similar project, for a few months now. Instead of relying on OpenAPI specs as the source, you define the "contracts" using Zod. These contracts are essentially OpenAPI specs, but Zod makes for a MUCH better DSL. I really like that...
- Real-time type checking without generator steps, both server- & client-side. XHR clients like `fetch`, `react-query`, etc. are inferred from the contract.
- The Zod contracts _can_ generate/output OpenAPI specs and Mock Service Worker mocks for testing (as needed, not required)
- (Optional) Automatic request & response validation based on Zod contract definitions, no additional code needed.
- Works very well with `react-hook-form` & `ZodResolver`, so e.g. an invalid value on the client side shows the same error the API would have returned if the request had been fired.
- Zod types can be used as TypeScript types (z.infer), a wonderful way to re-use e.g. model types in client-side components.
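As a rough illustration of the z.infer point, here's the core trick in plain TypeScript (no Zod, so this is only a sketch of the idea, not the library's API): one runtime schema value, with the static type derived from it instead of written out a second time.

```typescript
// Hand-rolled sketch of the idea behind z.infer: the schema is a
// runtime value, and the static type is derived from it.
// (Zod does this properly; this only shows the core trick.)
const userSchema = {
  name: "" as string,
  age: 0 as number,
};

type User = typeof userSchema; // { name: string; age: number } -- derived, not duplicated

function parseUser(input: unknown): User | null {
  if (typeof input !== "object" || input === null) return null;
  const obj = input as Record<string, unknown>;
  if (typeof obj.name !== "string" || typeof obj.age !== "number") return null;
  return { name: obj.name, age: obj.age };
}

console.log(parseUser({ name: "Ada", age: 36 })); // { name: "Ada", age: 36 }
console.log(parseUser({ name: 42 })); // null
```

Change the schema, and every consumer of `User` fails to compile until it's updated -- that's the single source of truth doing its job.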
This ts-rest experience has fundamentally solidified a belief in me: One single source of truth, propagating through the system, is _the_ best way to build safely and efficiently.
I am almost ashamed to look back on the many, many projects I've worked on where APIs and client-side did not share truth. Duplication of (hand-rolled) types, error messages, etc is horrific in retrospect.
I don't want to think about the amount of bugs I've come across due to this dual (at best) truth approach.
Thank you for your encouraging words and insights!
There are indeed popular DSLs and code-to-OpenAPI solutions out there, many of which are easy to plug into the openapi-stack libraries, btw!
I guess I personally always found it frustrating to try to control the generated OpenAPI output using additional tooling, and ended up preferring YAML + a visualisation tool (e.g. Swagger Editor) as the API design workflow.
But something like https://buildwithfern.com, or using Zod as a substitute for JSON Schema, may indeed be worth a try as a step before emitting OpenAPI.
Good point, and one that counts against ts-rest atm: if you're bringing your own OpenAPI spec, there is no OpenAPI->Zod converter available (yet).
The great thing about OAPI is there's _so much tooling_ available, but it can be daunting and very frustrating to find the "right one". I spent more hours than I'd care to count wading through the ecosystem.
Perhaps it'd be a good idea to promote a few tools via your project? I suspect many potential users would fall off early because they (imo wrongly) believe the upfront cost of writing the OAPI spec is too much to ask. I do understand the reaction if they don't know of good DSLs, though.
PS: ts-rest's video on the front page is what immediately convinced me to try it out. Your current interactive example is nice, _but_ it doesn't produce type errors for me, so the value isn't as immediately obvious (I'm assuming watch mode doesn't work in the sandbox?).
> Users would still be writing OpenAPI specs/JSON Schema by hand, an incredibly annoying and tedious process (unless using a nicer DSL)
My big footgun for this is that you can manually write JSON Schema that doesn't have a nicely serializable Java representation, making it hard to use cross-language -- and you don't find out that that's the case until you try.
Weird take. Since when does “pushing code” equal “releasing to production”?
FWIW, I love this mentality for healthy, work/life-balanced teams. It works if you structure around it. I often get the itch during the weekend to do a little work, so I go for it, because I can balance my days as I see fit. Ride the wave of creativity when it arrives, go do something else — like taking a hike, or running errands — when you can’t get shit done.
I assumed the GP was just concerned about somebody pushing code to a shared trunk branch that happened to break a build. Though, as it happens, I do know of some shops with a CI/CD pipeline so automated that such a push would result in a production update (of course, such a setup wouldn't even allow the push if it broke the build or caused any step in the pipeline to fail). But yes, I'd normally interpret "pushing code" as pushing to your own feature branch, in which case I'd wonder: is it really so unusual for devs to do this out of hours? I've certainly done it on weekends before, but with no expectation that anyone would review/merge the changes until normal business hours.
> pushing code to a shared trunk branch that happened to break a build
You revert that commit, then. It's broken. A revert shouldn't be taken as some mark of failure or personal offense; it's just that your commit breaks the build, so it's getting removed from `main` until such a time as it doesn't. It happens to us all, from time to time. Those reverts almost always carry what, at any professional organization, should be an implied "feel free to bring this in a PR again, with fixes".
(And ideally, a bit of introspection as to "why didn't CI catch the failure when it was still on a branch?")
Personally, I don't see why allowing people to merge PRs outside of business hours is likely to cause major issues either. But depending on your team workflow and build pipeline (which may not be fully automated for any number of reasons), it's not hard to imagine cases where it's preferable to ensure all pushes to trunk occur during business hours (e.g. for cost-saving reasons, maybe you don't keep the services your integration test suite depends on running 24x7).
It’s blatantly obvious that the quote from the article isn’t talking about pushing to #main and potentially breaking production during the weekend. If a team practices true CI/CD, they surely have excellent safeguards and 1-click (if not automated) rollbacks in place.
I imagine the only people who would think otherwise have some pretty dysfunctional repo setups and/or policies.
I sadly suspect the number of development teams with dysfunctional repo setups/policies somewhat outnumbers those with best practice configurations! Actually I can't say I've ever worked anywhere that would truly have qualified for the latter category.
Is that with the understanding that your employer is more amenable to you taking time off during the weekdays if you do happen to spend time working on the weekends? Because many employers would want to have it both ways and not allow you to take time off the weekday just because you worked during the weekend.
Yes, everyone at my $WORKPLACE is free to adjust their work schedule to whatever best suits them; we don't care about ass-in-seat or hours of work, we care about output and impact.
This isn't just lip service, either. It may be hard to believe from a US perspective, but for a Norway-based company, it can be taken at face value. It's the best place I've worked in my 22+ years.
Minor asterisk; we do have a few hours every week where you should be present (check-ins, meetings, all-hands, etc), but it's <5h/week for most people.
No worries, it's very believable, as I've worked in a role like that as well. However, there were certainly issues -- production incidents, getting unblocked by teammates, etc. -- that necessitated being online during much of the workday, so most people didn't work weekends on top of that either. It wasn't US-centricity that caused this outcome but rather technical issues. I assume that if your company doesn't have the same issues, it's much easier.
In reality, most people are working core hours, say 10am-3pm local time. Most are in UTC +/- 2 hours. I’m an outlier, remote from B.C, Canada.
It’s usually best to stick with a “normal schedule” for family/socializing reasons, anyway.
But, it’s incredibly freeing, being able to work when the mood strikes. I used to be a night owl, often getting super creative in the evenings. I could force myself to bed, sure, but a 3-4 hour stint in the evening would regularly produce outsized returns. An 8-9am start couldn’t remotely compete, creativity-wise, so why force it? Build a culture around trust and impact; it works out a lot better than anything else I’ve seen in practice.
I rarely do late nights anymore, having transitioned to 6-7am starts, an hour or two hiking around lunch with the dogs, and a small handful of hours of wrapping up & planning for tomorrow. But that was all my choice :)
I’ve looked everywhere for this in NodeJS & adjacent stacks; almost all migration tools seem to focus on tables, columns and rows. None seem to deal with views, functions, triggers.
I only got back into Postgres this year, after almost a decade away from SQL. It’s kind of bizarre to me that the migration tooling is still at the stage where a 1-line change to e.g. a Postgres function requires the whole function to be dropped and re-created.
I understand this is needed at the db level, but surely a “definition” that generates the final migration is doable; it would make such a huge difference in code reviews and to understand how a function/etc changed over time.
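What I'm imagining is roughly this (a hypothetical sketch; `normalize_email` and the shape of the definition object are made up): the full function definition lives in one versioned file, so a review diff shows the 1-line change, and a tool emits the drop/recreate migration mechanically.

```typescript
// Hypothetical "definition" for a Postgres function, kept in version
// control as the source of truth. Editing one line here shows up as a
// one-line diff in review.
const fnDefinition = {
  name: "normalize_email",
  args: "email text",
  returns: "text",
  body: "SELECT lower(trim(email));",
};

// Emit the full drop/recreate pair the database actually needs.
function toMigration(fn: typeof fnDefinition): string {
  return [
    `DROP FUNCTION IF EXISTS ${fn.name}(${fn.args});`,
    `CREATE FUNCTION ${fn.name}(${fn.args}) RETURNS ${fn.returns}`,
    `LANGUAGE sql AS $$ ${fn.body} $$;`,
  ].join("\n");
}

console.log(toMigration(fnDefinition));
```

The generated SQL is what gets applied; the readable, diffable definition is what humans review.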
Am I just looking in the wrong place? Does this exist? If not, how come? Is it really that hard to do?
Thanks! I’ll give it a look (their docs are offline atm)
DrizzleKit and several others do this for table changes, but nothing I’ve found (possibly excluding Flyway and other Java options) do views/functions/etc.
The migra maintainer has abandoned it for greener pastures. If you’re interested in sponsoring an alternative, send me an email. My pgmigrate tool has all the groundwork necessary to make this possible but I have held off implementing this because I am not personally interested in using it.
I’m not too keen on sponsoring an alternative when the dev doesn’t particularly care about the feature — it didn’t work out last I tried something similar. But, perhaps there’s another path: Supabase.
They are (imo) in desperate need of better migration tooling, and they sponsor several open source projects that boost their overall offering.
AFAIK, they haven’t done much in the migration space yet (aside from their alpha db branching feature), so I expect they’ll co-opt an open source solution at some point soon. May be worth pinging them? Seems it could be a win-win there.
20 years ago, Steam was the reason I stopped pirating games. Netflix et al had the same effect for TV & movies.
Today, I subscribe to 5 different streaming services, and occasionally do a month of various “channels” in those apps.
I want to pay for content, but the camel’s back is about to break. I truly don’t want to set sail again, but the bullshit has been adding up for a while. Ads are where I draw the line.
Much preferred to, say, ættestup: https://youtu.be/DwD7f5ZWhAk?si=WjwnN7cZC1h7TcJU