And even with JS enabled, it now needs more network round-trips, which is noticeably slower, even with a very low-latency connection to the server. For example, loading https://mastodon.social/@Gargron/ takes 1.2s to display the posts (or 3.3s when logged in), with a warm cache and 5ms ping to mastodon.social.
Zulip seems to rely too heavily on JavaScript to be indexable by search engines. I copy-pasted some sentences from month-old posts on https://leanprover.zulipchat.com/ into Google and Bing, and neither could find the posts.
Search engines are great at rendering content using JavaScript; they're just not able to explore the Zulip web application's URL structure, so they tend to index only one page per Zulip installation.
The current solution is for projects that want their publicly available content indexed by search engines to set up https://github.com/zulip/zulip-archive, which usually means hooking a GitHub Action up to GitHub Pages for a zero-infrastructure deployment. Ultimately, that's the same model as an IRC indexing project like this one: a tool separate from the chat server is responsible for search engine indexing.
Lean runs one here: https://leanprover-community.github.io/archive/, but it looks like it hasn't been updated in a year, so someone in the Lean community likely needs to investigate why it stopped updating.
https://github.com/zulip/zulip/issues/21881 tracks our goal of making the server natively offer search engine indexing. The current separate archive tool model has some advantages (search engine load can't break the server, for example), but I think it'll be worth doing the native version when we can find the resources for it.
Worth noting that the comment closing the issue mentions:
> You can disable Google Analytics in about:addons by setting your Do Not Track status to on.
> Again: this only affects users who visit the page with Tracking Protection on (which automatically enables DNT) or who manually set their DNT status to on.
but Firefox removed the DNT control last month (https://bugzilla.mozilla.org/show_bug.cgi?id=1928087), though it kept the Tracking Protection control and privacy.donottrackheader.enabled is still available in about:config.
Are Rust ABIs even guaranteed to be stable for a given compiler version and set of library features? I.e., doesn't the compiler allow itself to change the ABI based on external factors, like how other dependents use a library?
The Rust ABI is unstable, but is consistent within a single Rust version when a crate is built with `dylib` as its type. So Debian could build shared libraries for various crates, but at the cost of a lot of shared library use in a less common/ergonomic configuration for consumers.
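For reference, the crate type is selected in the manifest. A minimal sketch (the crate name is hypothetical):

```toml
# Cargo.toml for a hypothetical crate built as a Rust dylib.
# The resulting .so/.dylib uses the unstable Rust ABI, so every
# consumer must be compiled with the exact same rustc version.
[package]
name = "mylib"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["dylib"]
```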
The thing is that even `dylib` is just not very useful given the extent to which monomorphized generic code is used in Rust "library" crates. You end up having to generate your binary code at the application level anyway, when all your generic types and functions are finally instantiated with their concrete types. That's also why it's not sensible to split out a C-compatible API/ABI from just any random Rust library crate, which would otherwise be the "proper" solution here (avoiding the need to rebuild the world whenever the system rustc compiler changes).
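To illustrate the monomorphization point, here's a minimal sketch (the function is my own example, not from any real crate): no machine code exists for a generic function until a downstream crate instantiates it with concrete types, so that code can't live in a prebuilt shared object.

```rust
// A generic function as it might appear in a "library" crate. The
// compiler emits no machine code for it in isolation; code is only
// generated when it's instantiated with concrete types below.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &x in &items[1..] {
        if x > max {
            max = x;
        }
    }
    max
}

fn main() {
    // Each concrete instantiation (i32, f64) is compiled into *this*
    // binary, not into any shared library the generic came from.
    assert_eq!(largest(&[1, 5, 3]), 5);
    assert_eq!(largest(&[2.5, 0.5]), 2.5);
}
```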
Concrete example: the Apache Arrow format is designed to allow passing arrays across languages and libraries without conversion, so many public functions/methods (and FFIs) will have Arrow arrays in their signatures.
However, the Rust Arrow library bumps its major version every few months because it changes methods and auxiliary types, even though the memory layout of arrays stays the same. This means you can't have two dependencies (e.g. an ORC parser and a CSV serializer) that depend on different major versions of Arrow if you want to pass arrays from one dependency's functions to the other's.
And vendoring like you mention won't help, because the Rust compiler wouldn't recognize both vendored Arrow crates as compatible.
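To make that concrete, Cargo's dependency renaming lets both majors coexist in one build (the renaming syntax is real Cargo; the snippet is illustrative, not a recommendation):

```toml
# Cargo.toml: pulling in both Arrow majors side by side.
[dependencies]
arrow_52 = { package = "arrow", version = "52" }
arrow_53 = { package = "arrow", version = "53" }
```

But even though the in-memory layout is identical, the compiler treats the two crates' types (say, an `Int32Array` from each) as completely unrelated, so a value produced by a function using one version can't be passed to a function expecting the other without round-tripping through something like the Arrow C data interface.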
Use a single version of the arrow library, and build everything against that one. I don't see where the problem is.
I don't use Rust or third-party build or packaging systems -- I usually recommend that people don't do so either, but I understand that cargo is part of the appeal of Rust.
I'd say just build whatever you need; that way you won't be limited by arbitrary restrictions others have decided on.
I can't always do that, because dependency A depends on dependency B, which depends on arrow 52, while dependency C depends on arrow 53.
And sure, I can soft-fork dependency B to add support for arrow 53, and dependency A to depend on my soft-fork of dependency B, but it quickly becomes messy. Especially when in parallel I send unrelated patches to A and/or B.
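The soft-fork workaround would look something like this in the application's manifest, using Cargo's `[patch]` section (the crate names and fork URLs are hypothetical):

```toml
# Application Cargo.toml: override the crates.io versions of A and B
# with soft-forks that have been bumped to arrow 53.
# (Names and URLs are placeholders for illustration.)
[patch.crates-io]
dependency-a = { git = "https://github.com/me/dependency-a", branch = "arrow-53" }
dependency-b = { git = "https://github.com/me/dependency-b", branch = "arrow-53" }
```

Every downstream application has to carry these overrides until the upstream crates catch up, which is exactly the maintenance mess described above.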
Eh. Software is full of cliches and idioms. I shouldn’t be writing my own logging library and I sure as fuck shouldn’t be writing my own security library.
Also don’t write your own billing system unless you’re a finance company.
This is common for huge open source projects. However, when you send a patch to GitLab, they will assign someone to guide you through the process, and the patch will be merged eventually unless you bail out or they outright reject it.