Hacker News

"I can write a server in C++ that, with careful thought and a bit of planning, will be able to exploit the resources of the kind of 88-core dual-socket machines that are mainstream today."

This is downvoted grey as I write this, and as someone who has been called a "Go shill" on occasion... it's exactly right. Go is a decent language for writing code in a fairly straightforward manner and getting pretty good performance out of it, but if you need to squeeze every bit of performance out of your hardware, it's a bad choice. (I can give you choices that are worse by an order of magnitude, or even more in some cases, but it's still a bad choice.) You have a fairly smooth optimization ride up to 1.5-2x slower than C for most use cases, a few pathological edge cases where it's grossly worse (many clustered around these "every drop of performance" problems!) and a few where it'll reach parity, and then you're going to hit a brick wall.

An 88-core system is probably not impossible to use sensibly with Go, but you're going to be more constrained. It can probably serve web requests really well, but it's more likely to hit pathological cases if you hammer certain global resources.

Arguably, part of the point of Go was precisely that C++ makes you pay for that level of performance in code complexity and cognitive overhead all the time, even when you don't remotely need it. (I can't prove this, but I'd guess the median "cloud service" is grotesquely overprovisioned even on the smallest AWS instance. The "cloud services" that leap to mind are things like the AWS auth servers or Netflix content servers or the Google crawling or indexing servers, but while those are huge and important, they're also in many dimensions the exceptions. A good chunk of the popularity of "serverless" is probably a result of this.) When you do need every bit of performance, though, the list of viable options is short.




As a fellow gopher, I agree: Go isn't trying to compete with C/C++ on performance; it's trying to give you the safety and ease of use of a high-level language.

I think the point about concurrency typifies this: channels aren't as fast as mutexes and semaphores, but they make code shared with coworkers easier to reason about.

If you're in a domain where performance is still king, Go isn't trying to find a place there.


Presumably he was downvoted for responding to “concurrency in Go is easier” with “Disagree. I can get every bit of performance out of an 88-core CPU with C++”. It’s not a coherent counterargument; no one claimed Go was as efficient as C++, only that concurrency is easier.


How would Rust compare?


The benchmark game puts C, Rust, and C++, in that order, roughly on par in performance, with Go being about 2-3x slower. No idea if that's accurate. Sampling bias means people who like performance are optimizing the languages used for performance, and people who just like to get something working quit after the first benchmark in Go or Python finishes. https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


If you discard regexredux, then Rust is faster than C and C++ on average: see the average bar in the "How many times slower" graph [1].

The regexredux program is an outlier in Rust because regex replacement in a string is slower in the regex crate: its author chose to implement a safer but slower algorithm. To fix this, the regex crate must be updated or replaced. I spent two weekends on this.

[1]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> To fix this, regex crate must be updated or replaced.

I somehow doubt pcre2 is being changed to make the tiny toy C programs run better.

> I spent two weekends on this.

So shouldn't we assume the program performance simply reflects all-those-hours you've spent working on it?


Look at the program:

  use regex::{NoExpand, Regex};
  
  // Stand-in for the program's regex() helper: compile the pattern.
  fn regex(re: &str) -> Regex {
      Regex::new(re).unwrap()
  }
  
  fn find_replaced_sequence_length(sequence: String) -> usize {
      // Replace the following patterns, one at a time:
      let substs = vec![
          ("tHa[Nt]", "<4>"),
          ("aND|caN|Ha[DS]|WaS", "<3>"),
          ("a[NSt]|BY", "<2>"),
          ("<[^>]*>", "|"),
          ("\\|[^|][^|]*\\|", "-"),
      ];
  
      // Perform the replacements in the sequence:
      substs
          .iter()
          .fold(sequence, |s, (re, replacement)| {
              regex(re)
                  .replace_all(&s, NoExpand(replacement))
                  .into_owned()
          })
          .len()
  }

It measures the performance of the RE engine. I can switch from the regex crate to PCRE2, and the program's performance will match C.


> I can switch from regex crate to PCRE2, and program performance will match C.

Perhaps it would; those measurements have not been made.

What does that have to do with re-writing libraries to make tiny toy programs run better?

What does that have to do with program performance being a proxy for programmer effort?


mandelbrot is an outlier in Rust, because… :-)


Because hardware acceleration is used in the C and C++ versions. I will fix this soon.


> Sampling bias means people who like performance are optimizing the languages used for performance …

Also, there might be a "something to prove" bias :-)


Also, Dropbox rewrote the core stuff from Go to Rust when they needed more performance. So that is one example/anecdote.


Concurrency in Rust is extremely easy, because it is safe by design. Just import the rayon crate and change iter() to par_iter() [1]. The compiler will point out problems, e.g. it will not allow you to send a type that cannot be used concurrently until it is wrapped in an Arc (atomic reference counter).

[1]: https://docs.rs/rayon/1.0.3/rayon/



