Picking a language for the Chrome team doesn't seem practical - we all know where your question is going to head.
The point is they have to not pick C++. Again, they've invested many, many millions of dollars into security. Let's not pretend that they're priced out of using another language.
Now one could argue for Rust, but let's not pretend that C++ was a bad choice. C++ was the overwhelmingly best choice at the time.
What I'm saying is that Chrome is an example of a project with more security funding than just about any other project out there, and it still can't save users from the footguns of C++.
I guess the question used the wording "should have written it in?", to which I said "a memory safe one", and that still holds. I'm not interested in making a judgment call on their initial choice - I totally get the decision, regardless.
You can certainly make that argument but this doesn't seem to really be a compelling example. They had one security issue caused by a use-after-free over a decade. That's... a pretty fucking stellar track record and does not at all scream "you cannot use this language safely!!!", now does it?
I would certainly argue for something like Rust instead if starting from scratch, though, but just because A is better than B doesn't make B unusable trash, either.
My argument is:
* We have a production, widely used codebase
* This codebase is in C++
* This codebase has probably the most significant efforts to secure it of anything of its size, or at least it's in the top 5
* This codebase suffered enough critical vulnerabilities for users to be attacked in the wild
So let's be clear - Chrome suffers from thousands of memory safety vulnerabilities, but not many make it to ITW exploits. This case is special because users actually suffered because of it. It isn't fair to say they've had "one security issue", certainly, just one that's had major user impact.
I'm not judging C++ or the choice to use it. I am saying that this is an example of a very hardened system not being able to compensate for memory unsafety.
This is a bit of a simplistic view tbh. They have managed to drive up costs for 0 days on the black market. They have introduced auto-updating browsers to get security patches to users quickly. They have managed to reduce the number of security exploits. This is a big feat and major progress.
They have compensated for memory unsafety quite effectively. Maybe not perfectly, but well enough.
Yes, memory safe languages are wonderful. I myself am a rustacean and LOVE to RIIR. But their efforts weren't in vain.
Should they start thinking about adopting a memory safe language like Rust or Wuffs? Definitely.
It's a very literal one.
I think I've been very open about what an accomplishment they've made, and how proud they should be to have kept Chrome users safe for all of these years.
It is exactly because their efforts to secure Chrome are so impressive that I think this event is interesting.
None of what I've said is about Rust or even a suggestion that they shouldn't keep doing what they're doing. One ITW exploit in the lifetime of a product is a great track record.
It is merely interesting to observe when herculean efforts fall down.
The choice of language 15 years ago has nothing to do with the languages that could be in use today on the same project.
Rust has `unsafe` all over the place; it's no more safe if you use that keyword to do the same things that are done in C++.
Get more compiler people on Go and it will be even faster than it is now (really fast), and Go is written in Go so there's no reliance on C++, unlike Rust.
Yes, it is, because the language has the concept of safe code to begin with. The problem with C++ is that all C++ code could potentially be unsafe.
> Get more compiler people on Go and it will be even faster than it is now (really fast), and Go is written in Go so there's no reliance on C++, unlike Rust.
This is a very confused comment. Rust is also written in Rust. If you're thinking of LLVM, sure, the Rust compiler uses that, but LLVM is nowhere to be found in products written in Rust that are shipped to users. What matters is how safe the runtime is, and there's no real difference between Go and Rust in this regard. Go might theoretically have some kind of edge over Rust in safety in that it calls syscalls directly instead of going through libc, but I highly doubt it matters in practice, because (a) the libc wrappers are a drop in the bucket compared to the complexity of the kernel; (b) Go only does that on Linux these days anyhow.
In fact, I would say Rust is typically more safe than Go, because in Rust you can mark any code as unsafe, it doesn't necessarily have to involve unsafe pointers. For example in Rust FFI functions are typically marked unsafe even if they don't involve pointers. There's no way to do that in Go.
Another example is strings: Rust ensures that strings are always valid UTF-8 by marking any function that could break that invariant as unsafe. OTOH in Go, strings can contain invalid UTF-8 if I recall correctly.
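A minimal sketch of both boundaries described above, using libc's `abs` as a pointer-free FFI function alongside the checked vs. unchecked `String` constructors:

```rust
// Sketch of the two safety boundaries: `abs` is a real libc function
// with no pointers anywhere, yet calling across FFI still requires an
// explicit unsafe block.
extern "C" {
    fn abs(x: i32) -> i32;
}

fn main() {
    // FFI: unsafe even though no pointers are involved.
    let n = unsafe { abs(-5) };
    assert_eq!(n, 5);

    // Safe constructor: validates the bytes and rejects invalid UTF-8.
    assert!(String::from_utf8(vec![0xFF, 0xFE]).is_err());

    // Unsafe constructor: skips validation; the caller promises the
    // bytes are valid UTF-8, upholding the String invariant manually.
    let s = unsafe { String::from_utf8_unchecked(vec![b'h', b'i']) };
    assert_eq!(s, "hi");
    println!("ok");
}
```

There is no equivalent marker in Go: an FFI call or an invariant-breaking string construction looks like any other code at the call site.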
> it's no more safe if you use that keyword to do the same things that are done in C++
Sure, but using 'unsafe' everywhere that you would write unsafe C++ code simply isn't idiomatic Rust and except for very specific and limited situations, Rust developers don't do that.
You'll still be stuck with the fundamental tradeoffs that Go made like being GC'd and not having a fast FFI.
Those tradeoffs make sense in the target audience of Go (servers), but it's not going to make sense in something like a web browser.
When Go 1.8 was released, it measured garbage collection pauses of < 1ms on an 18GB heap, and that number has improved in the 2 years and 4 releases since then.
I don't believe that "it uses garbage collection" is a valid complaint against Go anymore, except in a real-time application, and I don't know of any real-time applications written in Go. I'm sure there are some, I just don't know of them.
Fastest by what measure? I benchmarked some key value store implementation written in golang and ran into large (> 1 sec) latency spikes because the golang gc couldn't keep up with the large (10GB+) heap space. Java would have allowed me to select a GC algorithm that's best for my use case and I wouldn't have run into that issue.
golang's gc is tuned for latency at the expense of throughput, that's basically it. It's not some magic bullet that solved the gc issue, contrary to what the golang marketing team wants people to believe.
If for your workload GC pauses are tolerable and throughput is important (say when doing offline CPU bound batch processing) then the Go GC is not optimal for your use case and will be slower than other options on the market.
Some runtimes (such as the JVM) allow the user to pick from one of many GCs so that they can use the one that is most appropriate for their workload.
GCs prevent you from using a huge range of techniques that improve cache locality or reduce allocation/free churn.
A simple example: games will often make one big allocation for all of a frame's temporary data and just bump-pointer allocate from it. Then when the frame is over, they reset back to zero. It is already The Perfect GC: bump-pointer allocation, zero pause time, and perfectly consistent memory usage. No matter how good Go's GC gets it will always be slower than that.
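That per-frame arena can be sketched in a few lines (an assumed minimal design: one buffer, one offset, no alignment handling or real frame loop):

```rust
// Minimal per-frame bump allocator sketch: grab one big buffer up
// front, hand out slices by advancing an offset, reset at frame end.
struct FrameArena {
    buf: Vec<u8>,
    offset: usize,
}

impl FrameArena {
    fn new(capacity: usize) -> Self {
        FrameArena { buf: vec![0u8; capacity], offset: 0 }
    }

    // Bump-pointer allocation: just advance the offset.
    fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
        if self.offset + size > self.buf.len() {
            return None; // out of frame memory
        }
        let start = self.offset;
        self.offset += size;
        Some(&mut self.buf[start..start + size])
    }

    // End of frame: O(1) "collection" with zero pause, no per-object frees.
    fn reset(&mut self) {
        self.offset = 0;
    }
}

fn main() {
    let mut arena = FrameArena::new(1024);
    for _frame in 0..3 {
        let a = arena.alloc(128).unwrap();
        a[0] = 42;
        let _b = arena.alloc(256).unwrap();
        assert_eq!(arena.offset, 384);
        arena.reset(); // all of the frame's temporary data gone at once
    }
    println!("3 frames allocated and reset");
}
```

The whole "collector" is one store to `offset`, which is the point: no tracing GC can beat it for this allocation pattern.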
At the other end of the spectrum you have things like LLVM's PointerUnion, where alignment requirements are (ab)used to cram a type id into the lower bits of the pointer itself: a type-safe variant in the size of a void*.
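The same low-bit trick can be sketched as follows (the `Tagged` type and its methods are illustrative, not LLVM's actual API; it relies on 4-byte alignment leaving the bottom two pointer bits free):

```rust
// A word-sized tagged pointer over two types. Because u32 and f32 are
// 4-byte aligned, their addresses always have the low bit clear, so we
// can store a 1-bit type id there.
struct Tagged(usize);

impl Tagged {
    fn from_a(p: &u32) -> Tagged { Tagged(p as *const u32 as usize) }     // tag 0
    fn from_b(p: &f32) -> Tagged { Tagged(p as *const f32 as usize | 1) } // tag 1

    fn is_a(&self) -> bool { self.0 & 1 == 0 }

    // Unsafe: the caller must have checked the tag, and the pointee
    // must still be alive.
    unsafe fn as_a(&self) -> &u32 { &*((self.0 & !1) as *const u32) }
    unsafe fn as_b(&self) -> &f32 { &*((self.0 & !1) as *const f32) }
}

fn main() {
    let x: u32 = 7;
    let y: f32 = 1.5;
    let ta = Tagged::from_a(&x);
    let tb = Tagged::from_b(&y);
    assert!(ta.is_a() && !tb.is_a());
    unsafe {
        assert_eq!(*ta.as_a(), 7);
        assert_eq!(*tb.as_b(), 1.5);
    }
    println!("two types in one word");
}
```

Note this is exactly the kind of technique a GC'd runtime forbids: a collector that scans and moves objects cannot tolerate pointers with stolen bits.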
This is only true in languages which don't provide other means of memory allocation.
D, Modula-3, Mesa/Cedar, Oberon variants, Eiffel, .NET all provide features for such techniques.
Android team also makes use of Go for their OpenGL/Vulkan debugger.
Now, lack of generics is really a big pain point.
Neither of those examples disagrees with what I said. Fuchsia's usage is the perfect example of agreement - Go specializes in IO (server workloads) and is then being used to do IO. That's a great use of Go's particular blend of capabilities.
The use of Go in GAPID would be more interesting if it had any meaningful constraints on it, but it doesn't. It's an offline debugger for something that ran on a device with a fraction of the speed.
RenderDoc would be the meatier Vulkan debugger, and it's C++.
As an early C++ adopter, I always find it ironic when people give examples of C++'s code generation performance.
Back in the old days, trying to use C++ would draw exactly the same kind of performance bashing.
Saying Go has its head up its ass is extremely disrespectful to the people who created it and are maintaining it. You lost all credibility when you chose the low road and said that.
The only thing left to do is to propose a system that has reasonable trade-offs, doesn't suck, and doesn't completely break the existing language. Easy peasy!
I wonder who is doing Go more of a disservice - the people who hate everything about it, or the blind fanatical devotees who think any criticism of any aspect of Go is heretical.
But you carry on using the empty interface with runtime type assertions and reflection if you think that's faster.
I sincerely apologise for wasting your time by typing something that was true as I understood it.
Clearly you've never made a similar mistake.
Not interested in your snarky "apology".
You have no idea what you're talking about.
> Saying Go has it's head up it's ass is extremely disrespectful to the people that created it and are maintaining it. You lost all credibility when you chose the low road and said that.
No it's not disrespectful, it is entirely true. Do consider the origins of Go, where it all started as an experiment in combining bad decisions into a single programming language to see what would happen. What those very people didn't realize upon releasing it to the world is the sheer number of people who fell for it. It was supposed to be a joke, with a stupid-looking mascot and all... Now they have to take it seriously - and Google has to choose between keeping it alive or getting a forever bad rep for killing it - because too many companies rely on it, and they are forced to retrofit useful features on top of the giant pile of garbage they've created.
Yes, it is disrespectful, and no, it is not entirely true.
If you want to talk literally, a team has no ass to shove things into.
If you want to talk figuratively, making mistakes does not constitute having your "head up your ass." Even making multiple mistakes doesn't warrant that kind of statement. It's rude and it's pointless and doesn't add anything at all except to make you look asinine. So now you're just as asinine as I am with my misunderstandings. Well done.
> If you want to talk figuratively, making mistakes does not constitute having your "head up your ass." Even making multiple mistakes doesn't warrant that kind of statement.
Making deliberate mistakes multiple times and resisting fixing them for 10 years does qualify as having their "head up their ass" (or asses if that's what you prefer).
Personally, I think Go is terrible. However, I can't imagine Google is even remotely considering scrapping Go. It is massively popular. It is used all over the place both externally and internally. It's a huge branding asset, community outreach tool, recruiting tool and provides leverage in the form of first party libraries over the direction that software development as a whole is going.
No, because Go really has its head up its ass.
This does not match my experience nor numbers I’ve seen. What leads you to say this?
(Also, other than LLVM, Rust is written in Rust. And when cranelift lands, you could remove that too.)
I've never heard of this happening in practice, however.
Of course, data races are usually race conditions too, meaning that even if the memory corruption were prevented, there would probably still be a correctness bug to worry about, so I can understand why the Go authors chose the tradeoffs that they did.
Edit: Reread your comment and I get the part that is specific to interfaces now. This seems harder to exploit than, for example, a read of a slice header being... sliced... resulting in an out of bounds access.
The Chrome team is great but when you're surrounded by people who think they're god damn gods on Earth it's hard to question orthodoxies, especially old ones.
Program in GNU Pascal or some such thing? Not us. Pedal to the metal!
D for example would have been an interesting choice (and I think it was usable back then).
D has some interesting bits, but it did a lot of head-scratching stuff too. It seemed really confused about what target it was going after, like some middle ground between C#/Java & C/C++ that really doesn't seem to exist.