You don't have to disable CORS to call routes on a different domain. Just configure your server properly so it knows which origins are allowed to access the routes.
So, like, for Python: just check the request's referrer against an allowlist on all routes, or something like that? Or is that something you'd rather do at the web server or proxy level, e.g. with gunicorn/nginx/caddy? I have gunicorn in front and have been doing something similar at the Python/route level.
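Roughly what I've been doing, sketched with Flask (the allowlist values are made up, and note that CORS matches on the Origin header rather than the Referer):

```python
from flask import Flask, request

app = Flask(__name__)

# Illustrative allowlist; real values would come from config.
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

@app.after_request
def add_cors_headers(response):
    # CORS is keyed on the Origin request header, not the Referer.
    origin = request.headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        # Echo back only the matching origin, never "*".
        response.headers["Access-Control-Allow-Origin"] = origin
        # Tell caches the response differs per Origin.
        response.headers.add("Vary", "Origin")
    return response
```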
> If the server specifies a single origin (that may dynamically change based on the requesting origin as part of an allowlist) rather than the "*" wildcard, then the server should also include Origin in the Vary response header to indicate to clients that server responses will differ based on the value of the Origin request header.
I dislike this about CORS: you might want to configure it at the web server/ingress level, but suddenly you need dynamic logic to return the appropriate header. For example, here's an older approach I used when fronting some apps with Apache: https://stackoverflow.com/a/22331450
I understand why it's in place (because you probably shouldn't leak all of the allowed domains), but dealing with that can be a bit awkward.
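If you're behind gunicorn rather than Apache, the same dynamic-origin idea can live in a WSGI middleware instead of the framework. A minimal sketch, with an illustrative allowlist and class name:

```python
# Hypothetical WSGI middleware: allowlist logic applied below the
# framework, for any app it wraps.
ALLOWED_ORIGINS = {"https://app.example.com"}

class CorsMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        origin = environ.get("HTTP_ORIGIN")

        def _start_response(status, headers, exc_info=None):
            if origin in ALLOWED_ORIGINS:
                headers = list(headers) + [
                    ("Access-Control-Allow-Origin", origin),
                    ("Vary", "Origin"),
                ]
            return start_response(status, headers, exc_info)

        return self.app(environ, _start_response)

# e.g. in the gunicorn entry point: application = CorsMiddleware(app)
```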
Yeah, browser security on the whole is surprisingly complicated. SameSite cookies, CORS, CORS preflight: the in-the-weeds behavior of each is invisible or unintuitive until you're deep into it.
I disable htmx.config.allowScriptTags, which is true by default and "determines if htmx will process script tags found in new content". I also enable htmx.config.selfRequestsOnly, which prevents cross-origin requests.
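With the meta-tag config mechanism htmx supports, that setup would look something like this:

```html
<!-- Disable script evaluation in swapped-in content and
     restrict htmx requests to the page's own origin. -->
<meta name="htmx-config"
      content='{"allowScriptTags": false, "selfRequestsOnly": true}'>
```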
Wasn't sure what you meant by "bringing together".
I assumed you meant to connect the two, not replace one with the other.
Fair though.
I'm wondering whether the next big thing in this space is going to be more BE-driven and include streaming larger fragments or some other sort of improvement.
Anything that tries to do more than Alpine and htmx seems to struggle to bring real innovation.
It was fun once working on a Laravel project where it was sometimes confusing for a new collaborator to make sense of which "x-attributes" were Alpine (so client-JS-related) and which were part of server-side Blade templates (so dynamic, but not "reactive").
This naming collision was preventable and misleading, but it somehow still "felt right".
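For anyone who hasn't used that stack, a contrived snippet of the collision: Blade components are `<x-...>` tags expanded on the server, while Alpine's `x-...` attributes run in the browser.

```html
<!-- Blade: the <x-alert> *tag* is expanded server-side -->
<x-alert type="error" :message="$message" />

<!-- Alpine: the x-data / x-on / x-show *attributes* run client-side -->
<div x-data="{ open: false }">
    <button x-on:click="open = !open">Details</button>
    <p x-show="open">Rendered reactively in the browser.</p>
</div>
```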
I also think server-rendered reactive apps are the future. Elixir/Phoenix comes to mind, but the LiveView stuff seems complex for most FE apps, which don't need the more efficient data diffing over the wire.
I'd say most FE apps are just fetching data from the BE via a REST API. RPC, TanStack Query, and all that stuff hide the plumbing by blending it in, but if you get the BE to render everything from the get-go, with all the state handled on the server, you wouldn't need FE devs.
An entire BE team could build a React-y application even with existing solutions like Inertia.js.
Going back to htmx, this is an interesting aspect of morph:
Help me understand why you need the JSON in these examples… You're rendering JSON data on the frontend just to transform it into HTML. The HTML is rendered server-side, so you should be able to skip the JSON and embed the data in HTML, right?
`<input name="firstName" value="John">`
That's literally the whole point of CORS.