Even granting all that you say is true, it would be trivial for there to be bias in such an apparently rigorous process. All that is required is selective application of the rules.
You’re presuming that the debate has to be carried out according to utilitarian rules (do benefits of free speech outweigh harms caused by certain speech). But why should it be?
This culture was, imo, directly responsible for Google's failure to launch a Facebook competitor early enough for it to matter.
The Orkut project was basically banned from being launched or marketed as an official Google product because it was deemed "not production ready" by SRE. Despite that, it gained huge market share in Brazil and a few other countries before eventually losing to FB. By the time their "production ready" product (G+) launched, it was hilariously late.
Facebook probably would have won anyway, but who knows what might have happened if Google had actually leaned into this very successful project instead of treating it like an unwanted stepchild.
How was it banned from being launched? It did launch and the desire to not be promoted as a Google product came from Orkut himself, iirc.
The reason it was not regarded as 'production ready' was that the architecture didn't scale. In fact it also didn't run on the regular Google infrastructure that everything else used and that SRE teams were familiar with; it was a .NET app that used MS SQL Server.
That outcome wasn't a big surprise. Facebook won not because Orkut lost but because Facebook were the first to manage gradual scaleup without killing their own adoption, by figuring out naturally isolated social networks they could restrict signup to (American universities). This made their sharding problem much easier. Every other competitor tried to let the whole world sign up simultaneously whilst also offering sophisticated data model features like the ability to make arbitrary friendships and groups, which was hard to implement with the RDBMS tech of the time.
Orkut did indeed suffer drastic scaling problems and eventually had to be rewritten on top of the regular Google infrastructure, but that just replaced one set of problems with another.
The attitude within SRE toward Orkut (the product) was one of disdain if not contempt. A healthy culture does not treat rapidly growing products this way.
I mean, I'm personal friends with a former Orkut SRE. The idea that Google SRE ignored or disdained Orkut just isn't right. Nonetheless, if your job is defined as "make the site reliable" and it's written in a way that can never be reliable then it's understandable that you're going to have at least some frustrations.
Of course restricting rollout to American university emails (including alumni addresses, at least at one point) was also a pretty natural consequence of Facebook's origins.
The overall point is that it's impossible to make a perfect spec. Rust doesn't even have a spec, so your behavior is whatever the compiler gives you, which is what C++ UB amounts to anyway.
Undefined behavior means something specific in the language specification world.
The language specification, or standard, guarantees certain things about the behavior of programs written to the specification. "Undefined behavior" means "you did something such that the guarantees of this specification no longer apply to your program." It's pretty much a worst-case scenario in terms of writing programs. The program might do... anything. In reality it happens all the time, and fortunately programs often keep behaving close enough to what we expect.
Turing completeness is unrelated to that sense of "undefined behavior".
> I understand and my point is rust has no “ub” only because there is no spec, not because it avoids inherent computing problems.
Well, your point is wrong because UB is not an inherent computing problem. That's what the post above tried to explain.
Many forms of UB are inherent to C-like languages, but languages don't have to be C-like.
> For example infinite template recursion is undefined. Specifying any other behavior is impossible due to halting problem.
A language can avoid this by not having infinite template recursion.
C++ currently allows infinite recursion at the language level, while acknowledging that compilers might abort early and recommending that 'early' is a depth of 1024 or higher. But a future version could bake that limit into the language itself, removing the problem.
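To make that concrete, here's a minimal sketch of the kind of unbounded instantiation in question. The exact cutoff and error message are implementation details; GCC and Clang expose the limit as -ftemplate-depth.

```cpp
// A template that instantiates itself with no terminating specialization.
// In practice, compilers stop at an implementation-defined depth (around
// 900-1024 by default on GCC/Clang) rather than recursing forever.
template <int N>
struct Infinite {
    static constexpr int value = Infinite<N + 1>::value; // never bottoms out
};

int main() {
    // Uncommenting the next line asks the compiler to recurse indefinitely;
    // in practice it aborts with a "template instantiation depth exceeded" error.
    // return Infinite<0>::value;
    return 0;
}
```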
> Another example: a system might be able to detect out of bounds pointer deref, or maybe not. Same with signed integer overflow.
A language can avoid out of bounds deref in many ways, one of which is not allowing pointer arithmetic.
Signed integer overflow is trivial to handle. I'm not sure what problem you're even suggesting here that the person in charge of the language spec can't overcome. C++'s lack of desire to remove that form of UB is not because it would be difficult.
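To illustrate why it's not hard, here is a sketch of one way defined behavior could be assigned to signed overflow. __builtin_add_overflow is a GCC/Clang extension, used here only to show how cheap the check is, not something the standard mandates.

```cpp
#include <cstdint>
#include <limits>

// Detect signed overflow explicitly and pick a defined result (here:
// saturation) instead of leaving the behavior undefined.
int32_t saturating_add(int32_t a, int32_t b) {
    int32_t result;
    if (__builtin_add_overflow(a, b, &result)) {
        // On overflow, clamp to the nearest representable value.
        return b > 0 ? std::numeric_limits<int32_t>::max()
                     : std::numeric_limits<int32_t>::min();
    }
    return result;
}
```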
> A language can avoid this by not having infinite template recursion.
How does it know whether a definition is infinitely recursive? This IS the halting problem.
> But a future version could bake that limit into the language itself,
In other words, take away the Turing completeness of templates. Which goes back to my original comment.
Also note that limiting recursion hurts a real-world use case (types getting arbitrarily complex in a program over time) in favor of a theoretical benefit (now you can say it's not UB).
> pointer deref
Let me explain again. In a language with pointers, checking whether a deref is valid requires comparing every dereferenced address against every allocation's bounds. That's ridiculously expensive.
The only solution is to take away pointers (Java, C#, etc.) OR do what C does: crash on obviously bad derefs. Since "obviously bad" depends on the implementation (maybe you are a safety sadist and you want 2000 instructions per deref), the standard cannot guarantee any behavior. Maybe it crashes, maybe you get "lucky" and it doesn't notice.
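To illustrate the cost I mean, here is a hypothetical "fat pointer" sketch. None of this is how C actually works; it just shows what a fully checked deref would have to pay.

```cpp
#include <cstdlib>

// A hypothetical bounds-checked pointer: alongside the address it carries
// the bounds of its allocation, and every dereference pays for a range check.
template <typename T>
struct CheckedPtr {
    T* ptr;     // current address
    T* begin;   // start of the underlying allocation
    T* end;     // one past the end of the allocation

    T& operator*() const {
        // The per-deref cost: two comparisons, plus (in a real system) the
        // bookkeeping to keep begin/end accurate through arithmetic and casts.
        if (ptr < begin || ptr >= end) {
            std::abort(); // defined behavior on an out-of-bounds deref
        }
        return *ptr;
    }
};
```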
The only way to avoid UB is to limit expressiveness (pretend addresses don’t exist), and all Turing complete languages have UB.
I have more responses, but you’re not grasping the ones I already made.
> In other words, take away the Turing completeness of templates. Which goes back to my original comment.
Template halting is only a correctness issue because it's done at compile time. Turing completeness is not a problem in general, and limiting the amount of computation at compile time is fine.
> The only solution is to take away pointers (Java, C#, etc) OR do what C does
Something along those lines.
> The only way to avoid UB is to limit expressiveness (pretend addresses don’t exist), and all Turing complete languages have UB.
The only way to avoid UB is to limit expressiveness, and NOT all Turing complete languages have UB.
Pointers are a big thing to restrict for a safe language. But you really don't have to do that much else. Whether a program halts or doesn't halt at runtime isn't a safety issue, there's no UB involved. It just runs indefinitely.
It is darkly amusing to me that it took the FDA well over a decade to conclude what was immediately obvious to anyone who has ever tried phenylephrine: that it is not worth taking.
People always like to say "thinking, not typing, is the bottleneck". This is a totally wrong way to think about it, because for the most part you don't think while you type. You think, then you type. They happen at different times! The more time you spend typing, the longer it is before you can start thinking again.
> because for the most part you don't think while you type.
I think that isn’t true if you’re a good enough typist. Once typing is automated enough, you can (somewhat) multitask.
It is similar to driving a car. If you've just learned to drive, you can't do anything else while driving, but as you gain experience, you learn to detect when the road needs your full attention and when it doesn't, and can talk or think about other things while driving.
I think being able to think of what to write next while typing may be even more important for a fluent conversation than typing faster. It will be difficult to test whether that’s true, though, because automating the act of typing also makes you type faster.
As an analogy, when you're having an in-person conversation, how often do you pause and think before responding, versus forming your thought and speaking it in real time? There are definitely times to stop and think for a moment before replying, but there are also times where I'm speaking my thoughts as they're formed.
Likewise, if you're comfortable and quick and "at home" with typing, then I'd argue there's no reason you can't type as you're forming your thought just like you speak as you're forming your thought.
(Admittedly I'm typing this on my phone, so not at the speed of my thoughts right now, but I know for sure I've done it over Slack with my coworkers and group chats on Telegram, especially when shooting short quick questions and answers back and forth)
Not only is typing speed not a significant factor, I don't find that thinking speed is either. The most important thing is what to think about and what to notice. Going in the right direction and not changing direction frequently and arbitrarily more than makes up for raw speed.
Knowing your tools and the underlying tech is also a must, and the better you know them, the better your intuition will be.
> You think, then you type. They happen at different times!
Really? I can certainly type while thinking, because I did so while writing this very sentence, and I have assumed that they are pipelined and that improving one without the other is meaningless up to some threshold. I should note that I don't think in English, but I can pull English words incrementally from my mind, so I don't think that matters much anyway.
Learning to touch type is all about automating the typing, reducing the cognitive load. The load is reduced to a point where one is thinking ahead a few words and the typing sub-process (as in sub-conscious) works the queue. That queue can be dozens of characters long. But stick one non-touch char in there and the queue blocks until that char is cleared.
I've been orienteering and the only time I'd stop is when the terrain didn't allow safely reading the map while in a jog. Or when I encounter an unexpected landmark, indicating I lost my orientation. Otherwise everything is planned while moving towards and scanning for the next waypoint in the queue.
I don't know how much resource is shared between physical and mental activities (my wild guess is "a bit but not much", but I don't have any backing evidence), but I'm sure that important decisions are exceptions rather than the norm.
Aside from fossil water, all of our fresh water comes from desalinated sea water, transported inland by clouds, which shows that there is no brine problem as long as the brine is dispersed widely enough in the sea. How widely counts as "enough" is something I wish the OP had discussed.
What are you talking about? Would Facebook allow an account that posted someone else's realtime location? No, probably not. Does Facebook claim to adhere to free speech absolutism? No. What principle are they applying inconsistently here?
This fails to respond to the point made in the reply you're responding to. Posting someone's location for potential stalkers is different from having their location. Posting it is consistently against Facebook's rules.
The person you're responding to is asserting that the privacy of the jet owners apparently matters, but the privacy of Meta users doesn't (presumably due to the way they do business etc.)
As far as I can tell the level of analysis being applied here is "Zuck is bad, therefore tracking his plane is good, therefore banning anyone who tracks his plane is bad."