The real problem cannot be solved by re-modelling in a different high-level language: the actual defect in that code is that it gets `pss` from a library call and `concurrencyProcesses` from a global variable, when both should be function parameters a test case can exercise.
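To make that concrete, here is a minimal Go sketch of the refactor; the `Process` type and the `schedule` function are hypothetical stand-ins, and only `pss` and `concurrencyProcesses` come from the code under discussion:

    package main

    import "fmt"

    // Process is a hypothetical stand-in for whatever pss contains.
    type Process struct{ ID int }

    // Before, the function read pss from a library call and
    // concurrencyProcesses from a global, so a test controlled neither.
    // With both as parameters, a test can exercise any combination.
    func schedule(pss []Process, concurrencyProcesses int) []Process {
        if concurrencyProcesses > len(pss) {
            concurrencyProcesses = len(pss)
        }
        return pss[:concurrencyProcesses]
    }

    func main() {
        fmt.Println(schedule([]Process{{1}, {2}, {3}}, 2)) // [{1} {2}]
    }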
Google’s gofuzz and dvyukov’s go-fuzz libraries have been instrumental to my success every time I embark on a journey to write a complex system with concurrency at its heart. The list of bugs I found in my co-workers' programs was so long that we decided to spend a month configuring the fuzzing tests and our continuous integration environment to run them automatically. Code quality increased by an order of magnitude, and we are now more confident about the code we ship to production.
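For anyone curious, a dvyukov/go-fuzz harness is just an exported function with the signature `Fuzz(data []byte) int`; here is a minimal sketch that fuzzes the standard JSON decoder (substitute whatever code you actually want to stress):

    //go:build gofuzz

    package fuzzing

    import "encoding/json"

    // Fuzz is the entry point go-fuzz expects. Build the instrumented
    // binary with go-fuzz-build, then run go-fuzz against it.
    func Fuzz(data []byte) int {
        var v map[string]interface{}
        if err := json.Unmarshal(data, &v); err != nil {
            return 0 // input didn't parse: deprioritize it
        }
        return 1 // input is interesting: keep it in the corpus to mutate
    }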
TLA is math, not a typical programming language.
If it's the latter, then TLA+ isn't the right tool, and neither is SPIN. There are other tools for verifying language semantics. If it's the former -- i.e. the author of the spec knows the meaning of each line and its intent -- then TLA+ is a very good tool, but then a simpler spec could suffice, perhaps something like this: https://pron.github.io/files/Channels.pdf
The question we're asking is how best to get people to use TLA+. Do you show them what model checking can do, and that TLA+ can model virtually anything and TLC can check many things, even at the code level, because that is something they already know from programming? But then they ask, as they do in this discussion, why you need TLA+ at all if all you're doing is translating Go to some other language. Or do you first try to show them TLA+'s real strength, which is the mathematical modelling of systems? But that is a new concept, different from programming, and it can be foreign to them.
It's a hard problem, and maybe the right approach is to do both, which is what you're doing if we take this post together with others you've written. But I think it's worth mentioning that in this case, this is not the spec someone who knows both TLA+ and Go well would likely write.
BTW, I'm trying to remember which of those two features sold me on TLA+, and I think it was the combination of both. I knew the limitations of model-checkers well enough (I had worked with NASA's Java model-checker before) to know that a model-checker alone wouldn't give me what I needed for the complex distributed system I was working on, and mathematical modelling alone of something so subtle wouldn't have sufficed either without some help checking (cheaply!) whether my ideas were right or wrong. So neither aspect would have sold me in isolation, I think.
The translation process itself could be error-prone, and bridging the disconnected concepts requires a considerable amount of mental effort.
I am very interested in using the compiler of the host language (the language used to write the software one wants to verify) to compile the program into a TLA+ spec.
And in the case of the example with Go as the implementation language (in the linked article), many of the details (like how channels work) can be reused between specs. This permits analysis of Go programs generally, without needing to respecify Go's concurrency mechanisms every. single. time.
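To give a sense of what such a reusable channel model has to capture, the observable rules fit in a few lines of Go (a sketch, not from the article):

    package main

    import "fmt"

    func main() {
        ch := make(chan int, 1) // buffered, capacity 1

        ch <- 1           // succeeds: the buffer has room
        // ch <- 2        // would block: buffer full, no receiver waiting
        fmt.Println(<-ch) // prints 1; a receive on an empty channel blocks

        close(ch)
        v, ok := <-ch // receive on a closed channel: zero value, ok == false
        fmt.Println(v, ok)
        // ch <- 3        // would panic: send on a closed channel
    }

Specify those rules once and every subsequent Go spec can import them.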
Now architecture and design is a very vague domain. It could be anything from specific algorithms all the way up to entire distributed systems (and indeed TLA+ can be used for any of those), but the point is that it is a separate entity from code.
I also think that in the current formal-methods environment this is where TLA+ can make the most impact. It is a far less risky business proposition to use TLA+ to record design decisions, where the maturity of its toolchain has minimal impact on your actual codebase, than to adopt a formal-methods toolchain that directly affects the code you write, where toolchain maturity becomes an extremely important business decision.
Applying your claim to architecture would be to say that the problem there is the high cost of having two (or more) representations of the building, once in steel and concrete and once as a blueprint, and to conclude that it would be better not to have a blueprint at all.
The spec is supposed to focus on different things than your code and show the bigger picture. While it would be great to then verify the correctness of your actual program, humanity currently does not possess this capability. The biggest programs ever verified "end-to-end" were ~10KLOC, or 1/5 the size of jQuery, and required years of work by world experts.
I think people see TLA+ as an escape from this construction nightmare - because its model is so simple and elegant. I know my wheels have been spinning trying to think about the best way to unify the design and construction processes. So I definitely don't fault people for trying to specify lower and lower levels.
Out of curiosity, what programs are you referring to? seL4 seems to fit the description.
In a way, you could make the same argument about writing tests: the code paths you exercise in your CI environment are not quite the same as what's running in prod, and neither is the environment in which the code is running (e.g. the load your process or host is under, the memory pressure, etc).
That said, don't let this small gap and the lack of a perfect solution discourage you from using model checking; there is still a lot to gain from adding it as one more tool in your toolbox.
That said, some languages offer more capabilities and tooling to support this. Check out Frama-C and its Mthread plugin (NB: I have no experience with either) for something that could help. In Ada there's SPARK, but I'm not sure to what extent it helps with verifying concurrency properties.
In some languages, like Erlang, Elixir, and Go, the concurrency mechanism is sufficiently well integrated into the languages that it becomes very clear while reading the code what's happening (especially if you properly segment your code into functions and modules to aid in readability). These can make the analysis much easier to perform.
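For example, even a producer/consumer hand-off in Go reads almost like its own design description (a minimal sketch):

    package main

    import "fmt"

    func main() {
        results := make(chan int)

        // Producer: sends three values, then signals completion.
        go func() {
            for i := 1; i <= 3; i++ {
                results <- i * i // blocks until the consumer is ready
            }
            close(results)
        }()

        // Consumer: receives until the channel is closed.
        for r := range results {
            fmt.Println(r)
        }
    }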
And regardless of language, if you have a specification and a design, the code tends to come out much cleaner, which makes the manual task of reviewing and verifying these critical parts less onerous. Additionally, it's not as hard to conceive of tests when you have the simpler spec to look at versus the much larger code constructed from it.
I mean, both tasks are writing, but the outline is on another abstraction level and makes the article a breeze to write.
Trying to compile a specification from the implementation would be equivalent to trying to do "top-down" via "bottom-up".
I mean, it would be slower if you spent time writing mechanically-checked formal proofs, but that's 10x the work for only slightly more confidence than you can get with TLC (a model checker for TLA+) alone, so in almost all cases people just stick to specifying and model checking.
TLA+'s place is as part of your design process (you have one, right?). It's the thing you do before committing any code to the project or radically restructuring an existing project (prototype code fits in the design process, but it shouldn't be considered committed most of the time). By spending a bit of time thinking about the design, and using tools like TLA+ to validate that design, you can save yourself months and years of heartache down the road.
By way of anecdote: some colleagues were working on a concurrent algorithm, needing to share data between two embedded processors talking over a databus. A bit more time spent on the design, versus the rather slapdash approach they took, would've saved them six months of debugging and tweaking until it worked. The issues were subtle, so it "worked" but wasn't totally correct, and tracing the errors they were seeing back to that code was non-obvious (and hard; debugging embedded systems isn't trivial). It was quite literally a case where skipping a couple weeks of design cost them months later.
I think you dramatically underestimate the amount of effort required to produce a formal model of any nontrivial system, and overestimate the stability of system definitions in response to changing business requirements.
The sort of things for which you write specs are generally "core" functionality that doesn't change very often. If people are constantly mucking around in that area without writing specs, your system will be a pile of crap.
That discovery is actionable; many other bug discoveries are not, at least not without further investigation.
This statement should be qualified: it applies especially if the system is at all novel to you, your team, your organization, or the world, and the amount of time to spend varies with its true novelty. A team that's made hundreds of RoR sites can churn out a new one in a week with minimal upfront design effort. A team that's making the next RoR should probably spend more time thinking about what that means before making it, or committing code to it.
I believe I made that mistake because most articles on the subject start by showing you a faulty piece of code.