Sure. They had a creation mythos like everyone else. What they didn't have is evidence of a real precursor civilization to ground those myths. The classical Greeks could see Mycenaean ruins.
Not as many as later civilizations, but there are structures that likely pre-date the Sumerian civilization, like the desert kites. And in Syria and Turkey there are megaliths and ruins older than Sumer, whose builders the Sumerians might have known of from oral history.
> They believed their writing was gifted to them by the Gods.
This is an essentially universal belief in the past, and not just about writing. People are able to notice that their lifestyle depends on technologies, and that the only way to learn those technologies is for someone else to teach you. So they decide that the technologies on which their lives depend - pressing olives, farming grain, writing, harvesting wool*... - were taught to their ancestors by the gods.
In the case of writing specifically, the ancient Greeks attributed it to Cadmus, who was not personally a god. But (1) he was a hero with descent from Poseidon, (2) Greek heroes receive prayers and sacrifices and grant supernatural blessings just the same way gods do, and (3) they credited him with introducing writing from Phoenicia, not inventing it out of whole cloth.
* In early records, sheep are not yet sheared - they're plucked. The sheep we have today aren't the sheep they had then.
> "Shearing" is actually a misnomer. The Akkadian term was "plucking." Before the end of the Bronze Age, domestic sheep did not continuously grow wool, and the wool could be combed or plucked when their coats shed in the spring.
Why would you think modern sheep make a relevant comparison point after I explicitly point out that they don't?
I've been programming Rust professionally for several years now and being totally honest I don't really understand Pin that well. I get the theory, but don't really have an intuitive understanding of when I should use it.
My usage of Pin essentially comes down to "I try something, and the compiler complains at me, so I pin stuff and then it compiles."
It's never been such a hurdle in the day to day coding I need to do that I've been forced to sit down and truly grok it.
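For what it's worth, the shape it usually takes for me is roughly this (a minimal sketch; `work` and `wants_pinned` are made-up names, and `Box::pin` is just the easiest way to satisfy an API that wants a `Pin<&mut T>`):

    use std::future::Future;
    use std::pin::Pin;

    // A made-up future, just for illustration.
    async fn work() -> u32 {
        42
    }

    // Stands in for any API that insists on a pinned future, the way poll() does.
    fn wants_pinned<F: Future<Output = u32>>(_fut: Pin<&mut F>) {}

    fn main() {
        let fut = work();            // a plain future; it can still be moved around freely
        let mut fut = Box::pin(fut); // Pin<Box<...>>: the future now has a stable address
        wants_pinned(fut.as_mut());  // Pin<&mut ...> is what pinned APIs accept
    }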
The problem isn't so much the lack of enums, it's more that there is no way to do exhaustive pattern matching at compile time. I have seen production systems go down because someone added a variant to an 'enum' but failed to handle that new variant everywhere.
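To make that concrete, this is roughly what compile-time exhaustiveness buys you in a language that has it (a Rust sketch with a hypothetical `Status` enum):

    // Hypothetical enum for illustration.
    enum Status {
        Active,
        Suspended,
        // Deleted,  // uncommenting this turns every non-exhaustive match into a compile error
    }

    fn describe(s: &Status) -> &'static str {
        // No `_ =>` catch-all, so the compiler enforces that every variant is handled.
        match s {
            Status::Active => "active",
            Status::Suspended => "suspended",
        }
    }

    fn main() {
        println!("{}", describe(&Status::Active));
    }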
Yeah, it makes a huge difference whether this is the default, and that's not a Go thing, it's not a programming language thing, it's the whole of human existence.
Literacy is an example. We're used to a world where it's just normal that other humans can make marks and interpret our own marks as to meaning. A thousand years ago that would be uncommon, ten thousand years ago vanishingly rare.
I really celebrate tools which take an idea that everybody agrees is a good idea, and bake it in so that everybody just has that, rather than agreeing it's a good idea but, eh, I have other things to do right now.
And it annoys me when I see these good ideas but they're left as an option, quietly for a few people to say "Oh, that's good to see" and everybody else misses out, as if "Literacy" was an optional high school class most of your peers didn't take and now they can't fucking read or write.
Example: Git has a force push feature, which (if we have the rights) lets us overwrite a completely unrelated branch state - given any state X, the state is now our state Y instead. This isn't the default, and that part is fine... Git also has "force-with-lease". This is a much better feature. Force-with-lease says "I know the current state of this branch is X, but I want to overwrite it anyway". If we're wrong, and X is not (any longer, perhaps) the current state, the push fails. But force-with-lease isn't the default.
[Edited to fix clumsy wording in the last sentence]
The overarching lesson of my career has been that people are fallible, and processes that rely on humans not making mistakes are bound to fail. Recently I've also been thinking about the importance of being able to reason "locally" about code; it might be the single most important property of a software system. "Locally" typically means a function, class, or module, but I think it can also be applied to "horizontal" cases like this. For example, if you add an enum variant, the compiler should guide you to reason about each place where the enum is no longer exhaustively matched.
In theory, yes. But practically there are always locations where we can't match every case, so we either have to live with a warning or add a catch-all arm. And as soon as a catch-all arm exists, we are in a "not checked any more" state, but with a compiler that is supposed to check for exhaustiveness. Which is way worse if the catch-all arm isn't a `panic("Unhandled match branch!")`.
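In Rust terms, the failure mode looks roughly like this (a sketch with a hypothetical `Token` enum):

    // Hypothetical token enum for illustration.
    enum Token {
        Ident,
        Number,
        Plus,
        Minus, // imagine this variant was added later
    }

    fn is_operator(t: &Token) -> bool {
        match t {
            Token::Plus => true,
            // The catch-all keeps this compiling, so the compiler never tells us
            // that Minus should probably return true here as well.
            _ => false,
            // A loud alternative at least fails at runtime instead of silently:
            // _ => panic!("unhandled token kind"),
        }
    }

    fn main() {
        // Minus is an operator, but the catch-all silently answers "false":
        // no compile error, no warning, no panic.
        println!("{}", is_operator(&Token::Minus));
    }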
Yes, you can counter that with GADTs and match against exhaustive subsets. But to ergonomically handle these cases, you need something like Pattern Synonyms or you drown in boilerplate.
Same as above: this solution stops working as soon as you're handling 5 out of 50 cases (or, more realistically, 10 out of 200). Lexical tokens are what always triggers the mentioned problems in my code - often you match against subsets, as there are _way_ too many of them to add them all explicitly.
First of all, I think there is a problem with your design if you have 50 or 200 cases. Most of the code I have seen -- in any language -- has no more than 10 cases, rarely 20. Maybe that is the issue to look at first.
Then, this "limitation" is not an argument for not running the exhaustive check. In the vast majority of cases where there are about 5 enum entries and most cases need their own path, they would be explicitly written out (i.e. no _ =>), and this works extremely well. I have had good experience, and I believe other people can attest this.
If you only have 1 case matched and everything else goes in _, and you later need to add one case, you just do that; there is likely nothing in any other language that can help there either. But what's described above is already a big improvement.
> this "limitation" is not an argument for not running the exhaustive check.
That's not what I wanted to express. What I wanted to say is that even when using Haskell, which has all the possibilities to actually handle matches of subsets quite ergonomically, you can't be sure that there isn't at least one case which isn't caught by the exhaustiveness checker (and sometimes it's just wrong, but then we're not talking about enums). So you always have to check everything manually, but the checker makes that easier.
This is actually the problem. I'd even say that it works like 95% of the time; that's why people (of course, I don't make such silly mistakes ;) aren't used to checking where it matters.
In reality the policy must be to always (whether there are working exhaustiveness checks or not) manually check each and every occurrence when adding. Don't get me wrong, I prefer having exhaustiveness checks, but they make the manual searching a bit less tedious; they don't eliminate it entirely.
I should have added that this solution stops working as soon as you're handling 5 out of 50 cases. Lexical tokens are what always triggers the mentioned problems in my code - often you match against subsets, as there are _way_ too many of them to add them all explicitly.
But you're not adding variants without a reason? You want them to have some effect.
It's hard for me to think of an example where it would even make sense to talk about "having to remember to handle the variant" rather than "handling the desired effect of the variant".
People are stressed, get distracted, are tired, don't have complete knowledge, etc. This is kind of like arguing that null pointers aren't a problem, you "just" have to check all usage of the pointer if you make it null. In practice we know solutions like this don't work.
I'd add that every switch/if should handle this exhaustively. For any project with more than a few dozen files, it is basically impossible to remember all the downstream code that uses the enum -- you have to track it down, or better, let the compiler automatically check all usages.
A few years ago I worked on a Solaris box that would lock the whole machine up whenever I grepped through the log files. Like it wouldn't just be slow, the web server that was running on it would literally stop serving requests while it was grepping.
My best guess is your grep search was saturating I/O bandwidth, which slowed everything else to a crawl.
Another possibility is that your grep search was hogging up your system's memory. That might make it swap. On my systems which do not have swap enabled but do have overcommit enabled, I experience out-of-memory conditions as my system essentially freezing for some period of time until Linux's OOM-killer kicks in and kills the offending process.
I would say the first is more likely than the second. In order for grep to hog up memory, you need to be searching some pretty specific kinds of files. A simple log file probably won't do it. But... a big binary file? Sure:
grep -a burntsushi /proc/self/pagemap
Don't try that one at home kids. You've been warned. (ripgrep should suffer the same fate.)
(There are other reasons for a system to lock up, but the above two are the ones that are pretty common for me. Well, in the past anyway. Now my machines have oodles of RAM and lots of I/O bandwidth.)
The link says that it supports only Fedora 38. Also, the main page for COPR says (in a small font): "NOTE: Copr is not yet officially supported by Fedora Infrastructure.". As I understand, it is the repository for packages uploaded by random anonymous users (not related to the authors of yabridge or Fedora).
That is correct. I didn't think specifically about Fedora 37; it's been a while since I upgraded to 38. I couldn't find F37 builds, even though that's around the time I tested yabridge. You might consider switching to 38 anyway, as 37 is less than two months away from reaching EOL -- F39 release date (17 October) + 30 days.
> As I understand, it is the repository for packages uploaded by random anonymous users (not related to the authors of yabridge or Fedora).
That is mostly correct. It was not uploaded, but built on the Fedora infrastructure, following the RPM spec you can reach from the builds tab [1], for example the latest change located here [2].
There is an amount of trust you have to give to the copr author, but you can also check the rpm spec file [3]. Important quick checks are around the source0 lines.
> Also, the main page for COPR says (in a small font): "NOTE: Copr is not yet officially supported by Fedora Infrastructure."
Getting a package shipped into the Fedora base repositories seems rather bureaucratic and I understand any hacker that doesn't want to use their own time to deal with that.
> if you construct a Vec with capacity 0 via Vec::new, vec![], Vec::with_capacity(0), or by calling shrink_to_fit on an empty Vec, it will not allocate memory.
So an empty vec![] is just a struct on the stack; very cheap to make, and easy for the compiler to optimize out if it can see that it's not used in some paths.
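You can check this directly (a quick sketch; capacity 0 means no heap buffer exists yet):

    fn main() {
        let a: Vec<u32> = Vec::new();
        let b: Vec<u32> = vec![];
        let c: Vec<u32> = Vec::with_capacity(0);

        // None of these have touched the allocator yet.
        assert_eq!(a.capacity(), 0);
        assert_eq!(b.capacity(), 0);
        assert_eq!(c.capacity(), 0);

        let mut d = vec![1u32, 2, 3];
        d.clear();
        d.shrink_to_fit(); // releases the buffer; back to capacity 0
        assert_eq!(d.capacity(), 0);
    }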
> Needlessly allocates a `Vec` if `self.payload` is `Some`.
Empty vecs don’t require a heap allocation, so your original code is actually fine. In release mode it should compile to exactly the same instructions as your last example.
No, because it requires an &Vec<_>, and that doesn't implement Default, for good reason. Just ask yourself: where would the default empty vec live so that you could create a reference to it?
When using unwrap_or(&vec![]), it lives on the enclosing stack frame. Without the reference, you could use unwrap_or_default().
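Roughly the two shapes being discussed, as a sketch (the `Msg` struct and the `Vec<u8>` payload type are placeholders for whatever `self.payload` actually is):

    struct Msg {
        payload: Option<Vec<u8>>,
    }

    fn len_by_ref(msg: &Msg) -> usize {
        // The temporary empty vec lives on the enclosing stack frame for the
        // duration of this statement, and an empty Vec never heap-allocates.
        msg.payload.as_ref().unwrap_or(&vec![]).len()
    }

    fn len_by_value(msg: &Msg) -> usize {
        // Without the reference, unwrap_or_default() works because Vec
        // (unlike &Vec) implements Default.
        msg.payload.clone().unwrap_or_default().len()
    }

    fn main() {
        let msg = Msg { payload: None };
        assert_eq!(len_by_ref(&msg), 0);
        assert_eq!(len_by_value(&msg), 0);
    }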
I lack the Rust experience to comment on your suggestion, but it does make me wonder if the author has provided a means for feedback in order to make this "More Effective Rust" (no pun intended if you're familiar with Meyers' follow-up book).
No. The interviewer suggested that I get one year experience elsewhere, then reapply for the job. I started to do that, but a month into a Haskell gig, I received an offer to manage a deep learning team at Capital One, and pivoted.
What is the biggest resource bottleneck in Mastodon?
Like, are there certain design decisions that could be optimised to make it more resource efficient? Is it just because it is written in Ruby? Is there something inherent to ActivityPub that means it has to use a lot of resources? Can work be done to make it cheaper to run?
ActivityPub involves a HUGE number of HTTP requests. Every time you create a post your server has to deliver that post via HTTP to every other instance that has at least one of your followers - this can quickly grow to hundreds of deliveries if you have a few thousand followers.
Mastodon uses a Sidekiq queue and Ruby threads to handle these deliveries.
I have a strong hunch that switching to delivery through a more efficient mechanism - async IO, or a lower level language - could have a big impact there.
My web browser doesn't blink at loading 100 resources to render a web page, so delivering a few hundred HTTP requests shouldn't be impossibly hard to scale, even when you take retries into account.
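For a sense of what "async IO in a lower-level language" could look like for that fan-out step, here's a rough sketch (not Mastodon's actual code; the crates, URLs, and concurrency limit are all made up for illustration):

    use futures::stream::{self, StreamExt};

    #[tokio::main]
    async fn main() {
        let client = reqwest::Client::new();

        // Hypothetical list of follower inboxes to deliver one activity to.
        let inboxes: Vec<String> = (0..300)
            .map(|i| format!("https://instance-{i}.example/inbox"))
            .collect();
        let body = serde_json::json!({ "type": "Create", "object": "hello" });

        // Fan out the deliveries, at most 50 requests in flight at once.
        let results: Vec<_> = stream::iter(inboxes)
            .map(|url| {
                let client = client.clone();
                let body = body.clone();
                async move { client.post(&url).json(&body).send().await }
            })
            .buffer_unordered(50)
            .collect()
            .await;

        let ok = results.iter().filter(|r| r.is_ok()).count();
        println!("delivered {ok}/{} (failures would be retried)", results.len());
    }

The buffer_unordered call is the knob that matters: it caps how many deliveries are in flight at once, so a few hundred outbound POSTs stay cheap on the sending side.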
I see. So essentially the issue is built into the way ActivityPub is designed.
It is a scaling issue. Your browser may not blink at loading 100 resources, but if you were constantly reloading hundreds of tabs, it would start to grind down. Rewriting in a more efficient language would help, but that would just mean it starts to grind down at 20,000 users rather than 10,000.
I'm surprised it doesn't do some kind of batched peer-to-peer communication instead. Maybe that wouldn't be as real-time.
The difficulty with a lot of dynamic languages is that it's very hard to determine at compile time which parts of the runtime will never be invoked, so you largely end up having to include the entire platform in the executable.