If you're interested in hosting it at no cost on Oracle Cloud's always-free tier (4 CPUs, 24 GB RAM), instead of buying a Mac Mini or paying for a VPS, I wrote up a how-to with a Pulumi infra-as-code template here: https://abrown.blog/posts/personal-assistant-clawdbot-on-ora...
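For a sense of what the template boils down to, here's a rough sketch of the core resource, assuming the @pulumi/oci provider - the OCIDs and availability domain are placeholders you'd replace with values from your own tenancy, and the full working version is in the post:

```typescript
import * as oci from "@pulumi/oci";

// Sketch only: all IDs below are placeholders, not the linked template verbatim.
const instance = new oci.core.Instance("assistant", {
    availabilityDomain: "XXXX:US-ASHBURN-AD-1",      // placeholder AD name
    compartmentId: "ocid1.compartment.oc1..example", // placeholder OCID
    // The always-free Ampere shape; 4 OCPUs / 24 GB RAM is the free-tier cap.
    shape: "VM.Standard.A1.Flex",
    shapeConfig: { ocpus: 4, memoryInGbs: 24 },
    sourceDetails: {
        sourceType: "image",
        sourceId: "ocid1.image.oc1..example", // e.g. an Ubuntu ARM image OCID
    },
    createVnicDetails: {
        subnetId: "ocid1.subnet.oc1..example", // placeholder subnet OCID
    },
});

export const publicIp = instance.publicIp;
```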
It doesn't, but from my perspective the thinking behind zero trust is partly to stop treating the network as a layer of security. Which makes sense to me - the larger the network grows, the harder it is to know all its entry points and the transitive reach of each.
Another interpretation of this is that the lead developer adequately mitigated the risk of errors while also managing the risk of not shipping fast enough. It's very easy to criticise when you're not the one answering for both, especially the latter.
I really wanted to adopt tRPC, but the deal breaker was it being opinionated about status codes without allowing configurability. Because I needed to meet an existing API spec, ts-rest was the better option. I think there's an additional option with a native spec generator in frameworks like Hono, and maybe Elysia.
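For illustration, this is roughly what it looks like in ts-rest - each status code in the contract is explicit, so an existing spec's codes can be matched one-for-one (schemas here are invented, and this is from memory of the contract API, so check the docs):

```typescript
import { initContract } from "@ts-rest/core";
import { z } from "zod";

const c = initContract();

// Every response status is declared explicitly, so a pre-existing API
// spec (say, one that returns 409 on conflicts) can be matched exactly.
export const contract = c.router({
  createUser: {
    method: "POST",
    path: "/users",
    body: z.object({ email: z.string().email() }),
    responses: {
      201: z.object({ id: z.string(), email: z.string() }),
      409: z.object({ message: z.string() }), // e.g. duplicate email
    },
  },
});
```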
His point that the runtime complexity of an API is entirely distinct from how its interface is exposed (whether GraphQL, REST, or otherwise) is fairly obvious, I think.
The counter-argument is that unlimited query complexity makes it a far bigger problem, and the author's point is that if you're using it for private APIs with persisted queries, you shouldn't have that problem unknowingly.
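For anyone unfamiliar: mechanically, persisted queries are just an allowlist. The client's queries are extracted at build time and the server refuses anything it doesn't already know, so query complexity is bounded by what you've reviewed and shipped. A minimal sketch of the idea (not any particular library's API):

```typescript
import { createHash } from "node:crypto";

// Queries extracted from the client bundle at build time, keyed by hash.
const knownQueries = [
  "query UserName($id: ID!) { user(id: $id) { name } }",
];

const persisted = new Map(
  knownQueries.map((q) => [createHash("sha256").update(q).digest("hex"), q]),
);

// The server only executes operations it already knows about, so an
// outside caller can't hand it an arbitrarily deep or expensive query.
function resolveQuery(hash: string): string {
  const query = persisted.get(hash);
  if (query === undefined) throw new Error("unknown persisted query");
  return query;
}
```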
Don't get me wrong - I think the takeaway is that GraphQL's niche is quite small, and he's defending exactly that niche. It's not often that you can keep an API private without undercutting its higher-order value in the future, as the rise of AWS hopefully made evident.
Everything I needed to know about Russell's performance war was answered when, whilst he was working at Google, folk started asking him why he was naming and shaming companies for poor performance when his exact critiques could be swiftly applied to Google's own apps (Calendar, Maps, Gmail). I wish I could find the twitter thread from back then, but the gist of his response was that what Google was doing was incredibly complicated - far more than anything the targets of his ire were working on - and as such it was reasonable not to have fixed those issues.
He wasn't wrong in his assessment of the complexity, but the fact that he refused to acknowledge that the business priorities were the same between Google and the companies he called out absolutely baffled me. The gist, from my perspective, was that companies external to his own should bend over backwards for performance while his should not, because his personal goals were tied to improving the performance of the web. Hopefully that's an over-simplification and I've missed something, but that's what I can recall.
Don't forget that he's also shaming everyone for complexity while his own work brings untold complexity to the web platform through dozens of JavaScript-only standards around Web Components.
> Folks seems to lack a kind of basic economic perspective.
I agree with most of what you said, but this seems a bit ironic.
Your suggestions almost exclusively involve large investments of time with little established proof that they are efficient. Do you genuinely believe reading RFC 2616 "cover to cover" is an efficient way of solving the specific problems they came across?
I would wager most developers wishing to be "really good" actually have a concrete goal like a bigger paycheque or a particular employer in mind. If that's true, I doubt reading someone's book list is necessarily their fastest pathway there, and their economic perspective is exactly what stops them from doing so.
A person who spends a considerable portion of their time really learning new things will initially be slower at their work than people who only do the work.
But over time this person will keep getting better and better, and at some point they will be able to do their job in a fraction of their time while still keeping momentum and learning more.
This has been my experience.
I have been spending 3/4 of my working hours learning new things for roughly the past 17 years, ever since I realised this. The actual work takes a very tiny part of my day.
Software development really is a knowledge sport. Writing code is actually a pretty small part of the task -- knowing what to write is the important part. If you are able to "write down the answer", so to speak, without spending much time getting there, you can easily be 10 or more times as productive as everybody else.
Agree that try/catch is verbose and not terribly ergonomic, but my solution has been to treat errors as values rather than exceptions, by default. It's much less painful to achieve this if you use a library with an implementation of a Result type, which I admit is a bit of a nuisance workaround, but worth it. I've recently been using: https://github.com/swan-io/boxed.
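A condensed sketch of the shape this takes (the match API is from memory, so check the boxed docs):

```typescript
import { Result } from "@swan-io/boxed";

// The possible failures are ordinary values in the signature, not a
// grab-bag of Error subclasses thrown from deep in the call stack.
type SignupError = { tag: "EmailTaken" } | { tag: "InvalidEmail" };

function signup(email: string): Result<{ id: string }, SignupError> {
  if (!email.includes("@")) return Result.Error({ tag: "InvalidEmail" });
  // ...persist the user; on a unique-constraint violation:
  // return Result.Error({ tag: "EmailTaken" });
  return Result.Ok({ id: "user-1" });
}

// Callers match exhaustively instead of hoping a catch-all upstream
// recognises every subclass of Error that might fly past.
const response = signup("foo@example.com").match({
  Ok: (user) => ({ status: 201, body: user }),
  Error: (err) =>
    err.tag === "EmailTaken"
      ? { status: 409, body: { message: "email already registered" } }
      : { status: 400, body: { message: "invalid email" } },
});
```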
By far the greatest benefit is being able to sanely implement a type-safe API. To me, it is utter madness throwing custom extensions of the Error class arbitrarily deep in the call-stack, and then having a catch handler somewhere up the top hoping that each error case is matched and correctly translated to the intended http response (at least this seems to be a common alternative).
In the latter example, the question is really one of how tightly you wish to couple the application layer to that of the infrastructure (controller). Should the application logic be coupled to an http REST API (and thus map application errors to status codes etc.), or does that belong in the controller?
I don't disagree that it's more practical, initially, as you've described it. However, I think it's important to point out the tradeoff rather than presenting it as purely more efficient. I've seen this approach result in poor separation of concerns and bloated use cases (`DoTheActualThing`) which become tedious to refactor, albeit in other languages.
One predictable side effect of the above, if you're working with junior engineers, is that they are likely going to write tests for the application logic with a dependency on the request / response as inputs, and asserting on status codes etc. I shudder to think how many lines of code I've read dedicated to mocking req/res that were never needed in the first place.
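To make that concrete, here's a hypothetical use case that never sees the transport, so its tests don't either (using node's built-in test runner, but any would do):

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Application layer: plain inputs, plain outputs, no knowledge of
// req/res or status codes. Mapping to http lives in the controller.
type RenameResult =
  | { ok: true }
  | { ok: false; reason: "NotFound" | "NameTaken" };

function renameProject(
  projects: Map<string, string>,
  id: string,
  name: string,
): RenameResult {
  if (!projects.has(id)) return { ok: false, reason: "NotFound" };
  if ([...projects.values()].includes(name)) {
    return { ok: false, reason: "NameTaken" };
  }
  projects.set(id, name);
  return { ok: true };
}

// No req/res mocks anywhere: native data types in, native data types out.
test("renaming a missing project fails", () => {
  const result = renameProject(new Map(), "p1", "new-name");
  assert.deepEqual(result, { ok: false, reason: "NotFound" });
});
```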
It leaves very little to the imagination as to whether or not ServeHTTP works, which is nice.
Complexity comes from generating requests and parsing the responses, and that is what leads to the desire to factor things out -- test the functions with their native data types instead of http.Request and http.Response. I think most people choose to factor things out to make that possible, but in the simplest of simple cases, many people just use httptest. It gets the job done.
I don't think it's poor to test http handling either, as a coarse grained integration test.
The problem I've seen is over-dependence on writing unit tests with mocks instead of biting the bullet and properly testing all the boundaries. I have seen folk end up with 1000+ tests, of which most are useless because the mocks make far too many assumptions, but are necessary because of the layer coupling.
This was mostly in Node though, where mocking the request/response gets done inconsistently, per framework. Go might have better tooling in that regard, and maybe that sways the equation a bit. IMO there's still merit to decoupling if there's any feasibility of e.g. migrating to GraphQL or another protocol without having to undergo an entire re-write.
> I don't think it's poor to test http handling either, as a coarse grained integration test.
Sorry to spring a mostly-unrelated question on you about this, but why do you call this an integration test? I recently interviewed three candidates in a row that described their tests in this way, and I thought it was odd, and now I see many people in this thread doing it also.
I would call this a functional or behavioral test. For me a key aspect of an integration test is that there's something "real" on at least two "sides" - otherwise what is it testing integration with? Is this some side-effect of a generation growing up with Spring's integration testing framework being used for all black-box testing?
(I will not comment about how often I see people referring to all test doubles as "mocks", as I have largely given up trying to bring clarity here...)
The reality is that I've heard unit, integration and e2e used almost entirely interchangeably - except, perhaps, the former with the latter. I don't think trying to nail the terms down to something concrete is necessarily a useful exercise. Attempts to do so, imo, make subjective sense only in terms of the individual's stack/deployment scenario.
To me, it's a contextual term much like 'single responsibility'. In this case, the two "sides" of an integration test are present. A consumer issues a request and a provider responds accordingly. The tests would ascertain that with variations to the client request, the provider behaves in the expected manner.
At which point you might point out that this sounds like an e2e test, but actually driving the client web app, for example, involves far more than a simple http client/library - whereas for an integration test the provider can easily run a simple consumer in memory and avoid the network entirely. E2e tests tend to be far more fragile, so from the perspective of achieving practical continuous deployment, it's a useful distinction.
integration tests in this instance: varying HTTP requests (infrastructure layer) provoke correct behaviour in the application layer.
e2e: the intended client issues http requests under the correct conditions, which provokes certain provider behaviour, which the client then actually utilises correctly.
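As a sketch of that distinction, an integration test in the above sense, where the consumer is a trivial in-process client rather than the real app (express and supertest here are purely for illustration):

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";
import express from "express";
import request from "supertest";

// The provider exposes a real http surface, but supertest drives it
// in-process on an ephemeral port - no deployed environment, no real
// client app.
const app = express();
app.get("/health", (_req, res) => {
  res.status(200).json({ status: "ok" });
});

// A varying http request (infrastructure layer) provokes observable
// behaviour in the application layer. An e2e test would instead drive
// the actual client against a deployed provider - hence the fragility.
test("GET /health responds 200 with a body", async () => {
  const response = await request(app).get("/health");
  assert.equal(response.status, 200);
  assert.deepEqual(response.body, { status: "ok" });
});
```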
This, to me, is why the most important part of testing is understanding the boundaries of the tests. Not worrying about their names.