Hacker News

A memory safe one. There are many of them. They could build their own if they chose to - they've built multiple languages in the past.

Picking a language for the Chrome team doesn't seem practical - we all know where your question is going to head.

The point is they have to not pick C++. Again, they've invested many, many millions of dollars into security. Let's not pretend that they're priced out of using another language.




Back when Chrome was getting started there were no memory safe languages that did not come with huge downsides.

Now one could argue for Rust, but let's not pretend that C++ was a bad choice. C++ was the overwhelmingly best choice at the time.


I didn't say C++ was a bad choice and I've purposefully avoided making this another language war issue.

What I'm saying is that Chrome is an example of a project with more security funding than just about any other project out there, and it still can't save users from the footguns of C++.

I guess the question used the wording "should have written it in?", to which I said "a memory safe one" - but that wasn't a judgment call on their initial choice, and I'm not interested in making one. I totally get the decision, regardless.


> What I'm saying is that Chrome is an example of a project with more security funding than just about any other project out there, and it still can't save users from the footguns of C++.

You can certainly make that argument but this doesn't seem to really be a compelling example. They had one security issue caused by a use-after-free in over a decade. That's... a pretty fucking stellar track record and does not at all scream "you cannot use this language safely!!!", now does it?

I would certainly argue for something like Rust instead if starting from scratch, though, but just because A is better than B doesn't make B unusable trash, either.


Well, that's not really a fair interpretation of my argument, I'd say.

My argument is:

* We have a production, widely used codebase

* This codebase is in C++

* This codebase has probably the most significant efforts to secure it of anything of its size, or at least it's in the top 5

* This codebase suffered enough critical vulnerabilities for users to be attacked in the wild

So let's be clear - Chrome suffers from thousands of memory safety vulnerabilities, but not many make it to ITW exploits. This case is special because users actually suffered because of it. It isn't fair to say they've had "one security issue", certainly, just one that's had major user impact.

I'm not judging C++ or the choice to use it. I am saying that this is an example of a very hardened system not being able to compensate for memory unsafety.


> not being able to compensate for memory unsafety.

This is a bit of a simplistic view tbh. They have managed to drive up costs for 0 days on the black market. They have introduced auto-updating browsers to get security patches to users quickly. They have managed to reduce the number of security exploits. This is a big feat and major progress.

They have compensated for memory unsafety quite effectively. Maybe not perfectly, but well enough.

Yes, memory safe languages are wonderful. I myself am a rustacean and LOVE to RIIR. But their efforts weren't in vain.

Should they start thinking about adopting a memory safe language like Rust or Wuffs? Definitely.


> This is a bit of a simplistic view tbh.

It's a very literal one.

I think I've been very open about what an accomplishment they've made, and how proud they should be to have kept Chrome users safe for all of these years.

It is exactly because their efforts to secure Chrome are so impressive that I think this event is interesting.

None of what I've said is about Rust or even a suggestion that they shouldn't keep doing what they're doing. One ITW exploit in a lifetime of a product is a great track record.

It is merely interesting to observe when herculean efforts fall down.


This is not the first chrome security issue in over a decade.


Not by a long shot either. The current stable version of Chrome (72) came with 58 security fixes, including a dozen UAFs, although several of those are in third party dependencies.

https://chromereleases.googleblog.com/2019/01/stable-channel...


There was even another UAF just a few months ago. https://chromereleases.googleblog.com/2018/11/stable-channel...


I wonder what are the huge downsides of Modula-2, Ada, Delphi, ....


C++ is a write-only language. Too many complex features that are used in ways that only the author of the code understands at the time of writing - or thinks they understand at the time of writing, even.


That doesn't mean that they (or anyone) should still be using it.

The choice of language 15 years ago has nothing to do with the languages that could be in use today on the same project.

Rust has `unsafe` all over the place; it's no more safe if you use that keyword to do the same things that are done in C++.

Get more compiler people on Go and it will be even faster than it is now (really fast), and Go is written in Go so there's no reliance on C++, unlike Rust.


> Rust has `unsafe` all over the place; it's no more safe if you use that keyword to do the same things that are done in C++.

Yes, it is, because the language has the concept of safe code to begin with. The problem with C++ is that all C++ code could potentially be unsafe.

> Get more compiler people on Go and it will be even faster than it is now (really fast), and Go is written in Go so there's no reliance on C++, unlike Rust.

This is a very confused comment. Rust is also written in Rust. If you're thinking of LLVM, sure, the Rust compiler uses that, but LLVM is nowhere to be found in products written in Rust that are shipped to users. What matters is how safe the runtime is, and there's no real difference between Go and Rust in this regard. Go might theoretically have some kind of edge over Rust in safety in that it calls syscalls directly instead of going through libc, but I highly doubt it matters in practice, because (a) the libc wrappers are a drop in the bucket compared to the complexity of the kernel; (b) Go only does that on Linux these days anyhow.


Yup, and besides, Go has the unsafe (https://golang.org/pkg/unsafe/) package, which allows for basically the same sort of unsafety as Rust's unsafe.

In fact, I would say Rust is typically more safe than Go, because in Rust you can mark any code as unsafe, it doesn't necessarily have to involve unsafe pointers. For example in Rust FFI functions are typically marked unsafe even if they don't involve pointers. There's no way to do that in Go.

Another example is strings: Rust ensures that strings are always valid UTF-8 by marking any function that could break that as unsafe. OTOH in Go strings can contain invalid UTF-8, if I recall correctly.


I thought Rust was written in C++, apologies.


My project currently has 104366 lines of Rust code and 174 uses of 'unsafe' (many of which are somewhat spurious, like usage of `mmap` and other system calls that don't have safe wrappers in external crates). My project does a lot of low-level stuff (e.g. ptracing, sharing memory with C++ code in other processes) and has lots of crazy low-level optimizations but still only has one use of 'unsafe' per 600 lines of code. I haven't made any particular effort to reduce usage of 'unsafe' either.

> it's no more safe if you use that keyword to do the same things that are done in C++

Sure, but using 'unsafe' everywhere that you would write unsafe C++ code simply isn't idiomatic Rust and except for very specific and limited situations, Rust developers don't do that.


> Get more compiler people on Go and it will be even faster than it is now (really fast), and Go is written in Go so there's no reliance on C++, unlike Rust.

You'll still be stuck with the fundamental tradeoffs that Go made like being GC'd and not having a fast FFI.

Those tradeoffs make sense in the target audience of Go (servers), but it's not going to make sense in something like a web browser.


Go has a garbage collector, and it's the fastest garbage collector that currently exists, by a wide margin, I believe.

When Go 1.8 was released, it measured < 1ms garbage collection times on an 18GB heap, and that number has improved over the 2 years and 4 releases since then.

I don't believe that "it uses garbage collection" is a valid complaint against Go anymore, except in a real-time application, and I don't know of any real-time applications written in Go. I'm sure there are some, I just don't know of them.


> Go has a garbage collector, and it's the fastest garbage collector that currently exists, by a wide margin, I believe.

Fastest by what measure? I benchmarked some key value store implementation written in golang and ran into large (> 1 sec) latency spikes because the golang gc couldn't keep up with the large (10GB+) heap space. Java would have allowed me to select a GC algorithm that's best for my use case and I wouldn't have run into that issue.

golang's gc is tuned for latency at the expense of throughput, that's basically it. It's not some magic bullet that solved the gc issue, contrary to what the golang marketing team wants people to believe.


Go's garbage collection is optimised for reducing GC pause times, and while it is extremely fast in this regard this is only one of many ways GC performance can be measured.

If for your workload GC pauses are tolerable and throughput is important (say when doing offline CPU bound batch processing) then the Go GC is not optimal for your use case and will be slower than other options on the market.

Some runtimes (such as the JVM) allow the user to pick from one of many GCs so that they can use the one that is most appropriate for their workload.


A fast GC doesn't mean the code wouldn't have been faster without a GC.

GC's prevent you from doing a huge range of techniques to improve cache locality or reduce allocation/free churn.

A simple example: games will do things like keep one big allocation for all of a frame's temporary data, and just bump-pointer allocate from it. Then when the frame is over, they reset back to zero. It is already The Perfect GC. Bump-pointer allocation, zero pause time, and perfectly consistent memory usage. No matter how good Go's GC gets it will always be slower than that.

At the other end of things you have things like LLVM's PointerUnion, where alignment requirements are (ab)used to cram a type id into the low bits of the pointer itself. A type-safe variant in the size of a void*.


> GC's prevent you from doing a huge range of techniques to improve cache locality or reduce allocation/free churn.

This is only true in languages which don't provide other means of memory allocation.

D, Modula-3, Mesa/Cedar, Oberon variants, Eiffel, .NET all provide features for such techniques.


Fuchsia team thinks otherwise, where kernel level stuff like the TCP/IP stack and IO volume handling are written in Go.

Android team also makes use of Go for their OpenGL/Vulkan debugger.

Now, lack of generics is really a big pain point.


> Fuchsia team thinks otherwise, where kernel level stuff like the TCP/IP stack and IO volume handling are written in Go.

> Android team also makes use of Go for their OpenGL/Vulkan debugger.

Neither of those examples disagree with what I said. Fuchsia's usage is the perfect example of agreement - Go specializes in IO (server workloads) and is then being used to do IO. That's a great usage of Go's particular blend of capabilities.

The use of Go in GAPID would be more interesting if it had any meaningful constraints on it, but it doesn't. It's an offline debugger for something that ran on a device with a fraction of the speed.

RenderDoc would be the meatier Vulkan debugger, and it's C++.


GAPID is an online debugger and a TCP/IP stack needs to be really fast.

As an early C++ adopter, I always find it ironic how people give examples of C++'s code generation performance.

Back in the old days, trying to use C++ would generate exactly the same kind of performance bashing.


Sorry. Until Go gets its head out of its ass and gets proper generics, Go will still be painful to use and slow. Not to mention, with all the extra code complexity from writing everything with interface{} and reflection, there are bound to be plenty of exploitable vulnerabilities. I honestly think it is worse than pre-generics Java because switching over the type of the variable is encouraged. Most sane type-checked languages kinda expect you to know what type you're working with at compile time.


Generics are slower, that's the trade-off. Developer time vs. execution time.

Saying Go has its head up its ass is extremely disrespectful to the people that created it and are maintaining it. You lost all credibility when you chose the low road and said that.


Look, there are core Go developers who admit that parametric polymorphism is a good idea. In fact, I'm not aware of anyone on the Go team arguing against that. Which is what you mean by "generics", right? In fact Go already has "generics" - maps, slices, channels. Are those "slow" in your opinion?

The only thing left to do is to propose a system that has reasonable trade-offs, doesn't suck, and doesn't completely break the existing language. Easy peasy!

I wonder who is doing Go more a disservice - the people who hate everything about it, or the blind fanatical devotees who think any criticism of any aspect of Go is heretical.


You lost all credibility when you wrote that first sentence.

But you carry on using empty interface with runtime type assertions and reflection if you think that's faster.


Well, pardon me for stating things as I understand them and making a mistake in doing so.

I sincerely apologise for wasting your time by typing something that was true as I understood it.

Clearly you've never made a similar mistake.


It was not true yet you stated it with certainty as fact.

Not interested in your snarky "apology".


Most generics work by monomorphisation, which basically means the compiler is doing a copy paste from int_btree.go to string_btree.go with the types changed. There is no runtime cost except text segment bloat. Generally modern compilers are smart enough to unify the identical machine code paths too, so you might not even get the code bloat.


> Generics are slower, that's the trade-off. Developer time vs. execution time.

You have no idea what you're talking about.

> Saying Go has it's head up it's ass is extremely disrespectful to the people that created it and are maintaining it. You lost all credibility when you chose the low road and said that.

No, it's not disrespectful, it is entirely true. Do consider the origins of Go: it started as an experiment in combining bad decisions into a single programming language to see what would happen. What those very people didn't realize upon releasing it to the world was the sheer number of people who would fall for it; it was supposed to be a joke, stupid-looking mascot and all. Now they have to take it seriously - and Google has to choose between keeping it alive or getting a forever bad rep for killing it - because too many companies rely on it, and they are forced to retrofit useful features on top of the giant pile of garbage they've created.


> No it's not disrespectful, it is entirely true.

Yes, it is disrespectful, and no, it is not entirely true.

If you want to talk literally, a team has no ass to shove things into.

If you want to talk figuratively, making mistakes does not constitute having your "head up your ass." Even making multiple mistakes doesn't warrant that kind of statement. It's rude and it's pointless and doesn't add anything at all except to make you look asinine. So now you're just as asinine as I am with my misunderstandings. Well done.


(I'm going to ignore the personal attack and insult)

> If you want to talk figuratively, making mistakes does not constitute having your "head up your ass." Even making multiple mistakes doesn't warrant that kind of statement.

Making deliberate mistakes multiple times and resisting fixing them for 10 years does qualify as having their "head up their ass" (or asses if that's what you prefer).


I don't know about the origin story, although frankly I hope it's true.

Personally, I think Go is terrible. However, I can't imagine Google is even remotely considering scrapping Go. It is massively popular. It is used all over the place both externally and internally. It's a huge branding asset, community outreach tool, recruiting tool and provides leverage in the form of first party libraries over the direction that software development as a whole is going.


>Saying Go has it's head up it's ass is extremely disrespectful to the people that created it and are maintaining it. You lost all credibility when you chose the low road and said that.

No, because Go really does have its head up its ass.


> Rust has unsafe all over the place

This does not match my experience nor numbers I’ve seen. What leads you to say this?

(Also, other than LLVM, Rust is written in Rust. And when cranelift lands, you could remove that too.)


Apart from memory safety issues, there are type safety issues that can cause equal security harm. Go's approach of casting interface{} back and forth is as dangerous as allowing malloc/free. The worst part is that the language designers don't see this as a problem.


interface{} is not great but it is not void *. Type assertions (casts) on interface{} are typechecked at runtime and failed type assertions on interface{} cannot violate memory safety.


Interfaces in Go can actually violate memory safety because of tearing when racing on a variable concurrently from multiple goroutines (unless things have changed lately). An interface value is two words (data and vtable), so there is a tiny window in which one goroutine could change the data pointer while another goroutine is reading the two words, producing a type mismatch that could probably cause arbitrary code execution in theory.

I've never heard of this happening in practice, however.


Unless I'm misunderstanding you, you're just talking about a garden variety data race on any interface value in shared memory right? If that's the case, I don't see what it has to do with interface{} in particular (or interfaces at all) as any multiword value in shared memory is susceptible to this, which is the unfortunate price you pay when your language supports shared memory concurrency but doesn't have any allowances for it in its type system.

Of course, data races are usually race conditions too, meaning that even if the memory corruption were prevented, there would probably still be a correctness bug to worry about, so I can understand why the Go authors chose the tradeoffs that they did.

Edit: Reread your comment and I get the part that is specific to interfaces now. This seems harder to exploit than, for example, a read of a slice header being... sliced... resulting in an out of bounds access.


Some languages guarantee memory safety even in the presence of data races (for example Java), while others simply prevent data races in their safe subset (Rust). Go is normally memory safe, but data races on some critical objects can compromise that.


Their egos have priced them out of a lot of stuff though.

The Chrome team is great but when you're surrounded by people who think they're god damn gods on Earth it's hard to question orthodoxies, especially old ones.


As trollish as that sounds, there's a painful amount of truth there.

Program in GNU Pascal or some such thing? Not us. Pedal to the metal!


It doesn't even have to be memory safe, just less memory accident prone.

D for example would have been an interesting choice (and I think it was usable back then).


D would be interesting but it's also not memory-safe. Not strictly. You can restrict yourself to SafeD, but then you're stuck with a GC you don't really want. Or you can tag all your functions as @safe, but then it's just a "best practice" and you can't do it everywhere because you can't do things like make system calls.

D has some interesting bits, but it did a lot of head-scratching stuff too. It seemed really confused about what target it was going after, like some middle ground between C#/Java & C/C++ that really doesn't seem to exist.


D was released in 2001, KHTML in 1998; it's like suggesting they should have written Vim in Java.


... and Chrome in 2008.


Chrome wasn’t written from scratch in 2008. Chrome was a fork of WebKit, which was a fork of Konqueror, which was a fork of KHTML.


KHTML is the rendering engine for the Konqueror browser. Webkit is a fork of KHTML. And Chrome's engine is called Blink (it used to be Webkit, but they forked it).



