It took a long time, but GitHub has finally (thankfully) shipped privacy settings that allow you to disable the way it broadcasts/publicizes your every move on your profile page. Does Codeberg or Gitea allow this yet? (They really should have been the first to do so...)
It would be nice to also disable the public listing of repositories for your profile page and to control it for organizations, too. (Not talking about private repos. Instead: these should be ordinary repos that remain accessible to anyone who has the link, but they are simply not aggregated into a single unified list that's available to anyone who clicks over to the "Repositories" tab. Think of them like unlisted YouTube videos, except they are all unlisted by default, rather than having to specifically designate each one as being unlisted—although that would work, too, it's just not the way it should be implemented.)
Codeberg is not for private code; it's explicitly for "Free and Open Source Software". It would probably still make sense to have some privacy for authors/contributors, so you can't simply list all activity by going to someone's profile on the website, but the repositories should still be findable, browsable, and public, as that's the goal of the repositories hosted by the Codeberg e.V. organization.
Off-topic, but I'm really amazed that this HN account has lasted for 2 years. I've seen pwdis accounts before, but they quickly get vandalized and locked out. Is this a lucky account, or was there mod intervention?
Edit: oh wait, never mind, the profile does confirm that dang is involved. Interesting stuff.
> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.
I don't think shared accounts violate the guidelines, since no accounts are routinely created by sharing one. Having an identity that others can relate to seems to be an ideal preference, not a hard requirement. Some people do routinely create new accounts anyway, to avoid having their writing style fingerprinted.
Anyway, shared accounts are interesting because they're a hack on the sign-in system that allows more anonymous posting. It reminds me of Stallman's aversion to passwords in the early days, when he used his login ID as his password.[0]
1. What can you point to that explains or even suggests that repos "should [...] be findable" directly on codeberg.org and for "people browsing [it] and pages within"? E.g. that creating an unlisted repo and then setting up a publicly facing project landing page/mailing list/etc. that independently refers potential contributors to the repo but doesn't make available a top-level index of all such (possibly unrelated) repos that a given person controls, is something that would run afoul of either Codeberg's policies or goals.
2. What is your relationship to the Codeberg e.V. organization?
I'd suggest you read their ToS[0], specifically section 2.1(.2). It shows that they'd really like you to make all content public. To guarantee that publicity, they take care of the listing. There is (currently?) no option for unlisted repos, just either public or private. And by observing the way they think about private repos, you can form an idea of how they'd think about unlisted ones.
I've read that. It doesn't say or even suggest anything like what we're actually talking about here. Someone pointing to those terms in the TOS is either confused about what this discussion is about, confused about what exactly the TOS is addressing, or both.
> 1. What can you point to that explains or even suggests that repos "should [...] be findable" directly on codeberg.org and for "people browsing [it] and pages within"? E.g. that creating an unlisted repo and then setting up a publicly facing project landing page/mailing list/etc. that independently refers potential contributors to the repo but doesn't make available a top-level index of all such (possibly unrelated) repos that a given person controls, is something that would run afoul of either Codeberg's policies or goals.
Here are some resources directly from Codeberg.org that explain the position:
> Codeberg is a collaboration platform and Git hosting for Free and Open Source Software, content and projects.
> Private repositories are only allowed for things required for FLOSS projects, like storing secrets, team-internal discussions or hiding projects from the public until they're ready for usage and/or contribution.
> Since this is not what Codeberg is meant for in a more narrow sense, stricter limitations might be implemented in the future.
> The mission of Codeberg e.V. is to build and maintain a free collaboration platform for creating, archiving, and preserving code and to document its development process.
> [2] (1) The purpose of the association is to promote the creation, collection, distribution and preservation of Free Content (Open Content, Free Cultural Works) and Free and Open Source Software (FOSS)
> [2] (2) For the collection and distribution of free content, open and commonly used Repository and Version Control Systems ("RCS" and "VCS") that save and preserve the whole history of the creation and improvement of Open Source software and make it freely available to society on the Internet, should be primarily but not exclusively used and generally made available.
1. Dodging the question with non-answers that conflate unlisted repos and better account privacy settings with the stuff that is known to be forbidden on Codeberg does no one any favors. It is no surprise that repos for non-free software and private repos for general use are not allowed. Where is the part that's relevant to what was asked?
2. Someone who was wondering whether I'd made a huge mistake by moving stuff there a year or two ago, and whether I should know something about the folks behind it that would indicate I should reverse that decision, not support it, and not recommend it (or, alternatively, advise against it) in the future.
I sometimes fear I shouldn't play driving games too much, lest I get desensitised to those intrusive thoughts I sometimes get on the road. I wonder how irrational that actually is, because it feels like pretty much the same thing as your idea. I mean, it doesn't seem far-fetched that a very realistic VR driving simulation with subtly "easier" physics might make you a worse driver in reality. More closely related to the fear of heights: we also jump from greater heights in video games than we would in real life, and much more frequently. I practically never have a reason to jump down from anywhere, so I could imagine making some wrong estimations when, after years of doing it in VR, I'm somehow faced with a real-life situation.
This is not really true. Chrome's Page Lifecycle API extensions to the Web platform are coherent, well-reasoned, and valuable/inoffensive, yet other browsers have not implemented them (yet; and the proposed APIs are old enough now that if what you're saying were true, it would already be a done deal).
The latest example is WebTransport, which isn't even on the standards track. Yet Chrome has already released it and calls it "an emergent standard".
And this goes for multiple other Chrome-only non-standards.
Good thing you found an API that you decided was reasonable. That doesn't make it a standard, or mean that there's a consensus, or that other browsers agree with your assessment. The age of the proposal doesn't factor into it.
Literally not a standard shipped in Chrome, literally is something Chrome came up with and implemented on its own, literally only shipped in Chrome without any consensus or input from other browser implementers...
And yet "no, this isn't true, this isn't how Chrome works at all".
Is this true for 100% of things that Chrome is shipping? No. But it's so asymptotically close that the difference doesn't matter. They ship 40 to 70 new web APIs in each version. That is, 40 to 70 new Web APIs every month. Over 500 new APIs a year. How many do you imagine they even pretend to be a standard? https://web-confluence.appspot.com/#!/confluence
> You "counterpoint" isn't even a counterpoint, but just reinforces the original comment.
This sleight of hand is not going to work.
> Literally not a standard shipped in Chrome, literally is something Chrome came up with and implemented on its own, literally only shipped in Chrome without any consensus or input from other browser implementers
... and literally hasn't gotten "someone to type up something that describes the Chromium implementation into the standard". Your choice to ignore this does not bode well for whether you should be taken seriously on matters of intellectual honesty.
The fact that other browser vendors have not been forced to implement it by now contradicts what you are arguing to be true.
All of the Chrome-proprietary APIs that shipped once upon a time in Chrome but were later removed from Chrome (incl. no remaining signs in subsequent draft standards) also contradicts it.
Does Chrome ship non-standard stuff? Yes. So has Gecko. WebKit, too. Has Mozilla in particular been forced to implement some things for no reason other than that they became unavoidable at some point after they started shipping in Chrome? Yes. Does the standardization process merely consist of Chrome doing whatever it wants, with the eventual result being a new standard (and nobody else able to influence this or contribute anything of their own)? No.
> And yet "no, this isn't true, this isn't how Chrome works at all".
Nice strawman. The moment where you resort to putting words in someone else's mouth is the moment you forfeit. Goodbye.
> and literally hasn't gotten "someone to type up something that describes the Chromium implementation into the standard".
And it literally exists as a spec, which I even linked. That gives Chrome and gullible devs license to say things like "oh, look at this beautiful, reasonable standard that other browsers have not implemented".
> All of the Chrome-proprietary APIs that shipped once upon a time in Chrome but were later removed from Chrome
Of course they haven't. By default everything that Google ships and is not a standard is Chrome-only.
So let's look at your example, Page Lifecycle API.
- Is it a standard? No.
- Is it even on a standards track? No.
- Is it shipped only in Chrome? Yes.
- Has Chrome dropped it? No.
- Does this make it a Chrome-only non-standard? Yes.
- Does Chrome drop this and hundreds of other such APIs? Of course not.
Thankfully, we can check that from two sources:
- Chrome's own Web APIs dashboard. https://web-confluence.appspot.com/#!/confluence If you click "Browser Specific", you will see that Chrome ships over a thousand Chrome-specific APIs, and that number grows rapidly.
And this is undoubtedly on top of the APIs that they pretend are standards. This is where the second source comes in:
This one lists both actual existing APIs and "experimental APIs". Those experimental APIs? Most of them are "not a standard, not on a standards track", but are shipped in Chrome. I checked the letter B on that page: there are 4 experimental APIs, all of them "not a standard, not on a standards track", and all of them shipped in Chrome.
> The moment where you resort to putting words in someone else's mouth is the moment you forfeit.
Which I of course didn't. I did paraphrase it for dramatic effect, but "this is not really true" and "all chrome-only proprietary APIs were dropped" amount to the same thing.
> "this is not really true" and "all chrome-only proprietary APIs were dropped"
First of all, that's insane, but secondly, no one is even claiming the latter. It's easy to make up things all day. Say something connected to reality.
Easy. I even gave links to show how modern web "standards" work. To quote myself: "Is this true for 100% of things that Chrome is shipping? No. But it's so asymptotically close that the difference doesn't matter. They ship 40 to 70 new web APIs in each version. That is, 40 to 70 new Web APIs every month. Over 500 new APIs a year. How many do you imagine they even pretend to be a standard?"
All this in response to literally what has been happening for the past several years: Chrome ships its own non-standards (even if it spits out a spec, that doesn't make it a standard), developers start using them, and due to Chrome's dominance they become de facto standards.
To think otherwise is to be completely oblivious to what's happening in web standards.
Edit. As to "then someone writes a standard". This also happens. See Web HID timeline: https://github.com/mozilla/standards-positions/issues/459#is... Same happened to WebRTC, by the way. Stable spec version was finally complete in 2018, 7 years after Chrome spat it out and called it a standard. And so on and so on.
And the answer is, "no, that's not the standardization process", and furthermore, "that comment was hyperbole". If you can't admit this, then you are disconnected from reality.
None of your words or links will make the original comment true.
Only if you disregard the amount of latitude that the semantics of these headers give to UAs that would effectively thwart this method of tracking.
If I fetch your /foo.html today in November 2022, and you send me a last-modified from 1978, that gives me and my UA a huge range from which to select a different datetime (anywhere between the 1978 value and now-ish) on my next request. How are you going to correlate my original and subsequent requests if in the latter I ask if you've got a copy that's been modified since 1999?
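A rough sketch of what I mean, in TypeScript (purely hypothetical; the function name is made up and real browsers don't expose anything like this, it's just to illustrate the mechanics a UA could adopt internally):

    // Hypothetical sketch (not a real browser API): a UA that refuses to echo
    // back the exact Last-Modified value it was given, and instead revalidates
    // with a random datetime drawn from the legitimate window [Last-Modified, now].
    async function fuzzedRevalidate(url: string, lastModified: string): Promise<Response> {
      const earliest = new Date(lastModified).getTime();
      const now = Date.now();

      // Any instant in this window is a semantically valid If-Modified-Since:
      // the server can still answer 200 or 304 correctly, but the value no
      // longer matches the per-user timestamp it originally planted.
      const fuzzed = new Date(earliest + Math.random() * (now - earliest));

      return fetch(url, {
        headers: { "If-Modified-Since": fuzzed.toUTCString() },
      });
    }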
Context is important. The replied-to comment starts off, "While this particular implementation doesn't track individuals, couldn't your trivially start tracking individuals by[...]"
An acceptable response, then (to both you and the original commenter), follows: "While some particular browser version doesn't currently protect individuals from that proposed form of tracking, any browser vendor could trivially start thwarting that form of tracking by exploiting the latitude afforded to UAs by the semantics of these headers." And that's the form that the previous comment takes and how it should be understood. The fact that "users go to the web with the browser they've been given [i.e., today, and which isn't providing this sort of tracking protection]" doesn't change anything; we are explicitly talking about steps that each side _can_ take in the arms race related to the subject of this discussion...
Would you be open to using an apostrophe ’ in your header instead of the straight single quote? I dig the Comic Sans, but it would look so much better. Cheers lol
There is always client-side state. The dropdowns, for example, have selection that affects the entire app. That selection must be stored in a state which will affect the reloads triggered by changes in the dropdowns.
Well, sure, the value of an input is in its value property, but you build your stuff in such a way that you don’t access it unless you absolutely have to.
I'm the one who gave this talk, and I can assure you there is no such thing in our code. htmx just enables us to fire some JS events and react to them by triggering AJAX calls then replacing some <div> with some HTML fragment. No state management, just a hook system.
Ok, then I've explained myself poorly. I see that there are both facet filters and favorites on the page, both of which affect what the rest of the page shows. In my mind, that's client-side state. It doesn't have to mean that it's managed with JavaScript, but the state does exist; it's changed any time the user changes any inputs in the browser. Furthermore, those changes together seem to affect the rest of the page, if I'm not mistaken?
My question was where is the favorites (and facet) state stored. Is it in "html inputs", in which case, I suppose they are included in the requests somehow later? (perhaps via `hx-include`). The answer could also be that e.g. favorites are permanently stored on the backend...
Additionally, I was also wondering what htmx can do in more complex cases, like e.g. a "sort direction" button, where you need to set the sort column(s) and direction(s) of columns. It feels like it's really easy to exit the htmx comfort zone, after which you have to resort to things like jQuery (which is a nightmare). Or perhaps web components, which would actually be a nice combination...
I don't see facet filters and favorites as "client-side state": to me it's "application state", changed by a user interaction. And you're right, it's related to how the state is stored.
As you anticipated, favorites are stored in a database on server-side, so that makes "show me my favorite items" or "show me items related to my favorite articles" the exact same feature as selecting an option in a facet filter.
The state of "I have selected options 1 and 2 in this facet filter, and option B in that other filter" is simply stored in... the URL. And this is why I think it's "application state" rather than "client-side state", and this is why the hypermedia is great IMO: this whole search+facets+favorites+sorting feature becomes nothing more than a <form> with hidden inputs, generating GET requests which URLs are put in the browser history (keywords search, selected options from facet filters and sorting are put into querystring parameters). And that's great, because it happens that one of our features is to send our users custom e-mails with deep links to this the UI, with facet filters pre-selected. All we have to do is generate links with querystring parameters pre-configured, and the user directly gets to a screen with pre-selected facet options, sorting, etc. To me, such behavior cannot be called "client-side state management".
I did watch the video, iirc he literally says “is this client-side state I have to worry about? No.” multiple times. When the user changes something that affects the UI in multiple places, all the necessary fragments are fetched from the server and swapped in by htmx.
You still have to consult the state of multiple facet dropdowns, as well as the user's personal list of favorites at the same time to get the correct response, don't you?
As I said in my other answer: our facet filters are nothing more than hidden inputs in a form. So nobody consults "the state of multiple facet dropdowns", except htmx when it generates the URL of its XHR call. Everything else (filtering items according to querystring parameters, fetching user favorites, etc.) is done on server-side.
Do they? Ten years ago, Walmart distribution center workers were making what the writer says he made at Amazon, which is itself surely the result of recent increases. I have no idea what Walmart pays today, especially given the post-COVID changes to work and prices. Four or five years ago, on the other hand, I'd heard Amazon was around $13 to $14.