Why Discord is switching from Go to Rust (discordapp.com)
1582 points by Sikul 24 days ago | 640 comments



Looks like the big challenge is managing a large LRU cache, which tends to be a difficult problem for GC runtimes. I bet the JVM, with its myriad tunable GC algorithms, would perform better, especially Shenandoah and, of course, the Azul C4.

The JVM world tends to solve this problem by using off-heap caches. See Apache Ignite [0] or Ehcache [1].

I can't speak for how their Rust cache manages memory, but the thing to be careful of in non-GC runtimes (especially non-copying GC) is memory fragmentation.

It's worth mentioning that the Dgraph folks wrote a better Go cache [2] once they hit the limits of the usual Go caches.

From a purely architectural perspective, I would try to put cacheable material in something like memcache or redis, or one of the many distributed caches out there. But it might not be an option.

It's worth mentioning that Apache Cassandra itself uses an off-heap cache.

[0]: https://ignite.apache.org/arch/durablememory.html [1]: https://www.ehcache.org/documentation/2.8/get-started/storag... [2]: https://blog.dgraph.io/post/introducing-ristretto-high-perf-...


On the one hand, yes. On the other hand, all of this sounds much more complex and fragile. This seems like an important point to me:

"Remarkably, we had only put very basic thought into optimization as the Rust version was written. Even with just basic optimization, Rust was able to outperform the hyper hand-tuned Go version."


I found something similar when I ported an image resizing algorithm from Swift to Rust: I'm experienced in Swift and so wrote it idiomatically, and I have little Rust experience and so wrote the Rust version naively; yet the Rust algorithm was still twice(!) as fast. And Swift doesn't even have a GC slowing things down!


> Swift doesn't have a GC slowing things down

This Apple marketing meme needs to die. Reference counting arguably incurs more cost than GC, or at least the cost is spread throughout processing.


It incurs some cost, but whether it is higher is very debatable. This is very much workload dependent. A smart compiler can elide most reference updates.


No it can't, not in Swift's current implementation.

https://github.com/ixy-languages/ixy-languages


It would seem that the Swift compiler is far from smart[1].

[1]: https://media.ccc.de/v/35c3-9670-safe_and_secure_drivers_in_...


Apple's ARC is not a GC in the classic sense. It doesn't stop the world and mark/sweep all of active memory. It's got "retain" and "release" calls automatically inserted and elided by the compiler to track reference counts at runtime, and when they hit zero, invoke a destructor. That's not even close to what most people think of when they think "gc". Of course it's not free, but it's deterministic.
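
To make the "deterministic" part concrete, here's a rough analogue in Rust (not Swift's ARC, but the same counting idea): the destructor runs at the exact statement where the last reference goes away, rather than at some later GC pause.

    use std::rc::Rc;

    struct Payload;

    impl Drop for Payload {
        fn drop(&mut self) {
            // Runs at a predictable point: when the count hits zero.
            println!("freed");
        }
    }

    fn main() {
        let a = Rc::new(Payload); // count = 1
        let b = Rc::clone(&a);    // "retain": count = 2
        drop(a);                  // "release": count = 1, nothing freed yet
        println!("before last release");
        drop(b);                  // "release": count = 0, prints "freed" here
        println!("after last release");
    }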


I think you are being borderline pedantic here. ARC _is_ GC in the classic sense: https://en.wikipedia.org/wiki/Garbage_collection_(computer_s...

I agree with you that most people tend to associate GC with something more advanced nowadays, like mark and sweep as you said in another comment, but it seems pointless to argue that ARC is not a form of GC.


You're using a hearsay, folklore definition of GC, not a CS one.

Refcounting in the presence of threads is usually non-deterministic too.


It is non deterministic, but at a much different scale.


Reference counting is a (low-throughput, low-latency) form of garbage collection.


Yes and no. From a theoretical perspective, I suppose that's true, but "garbage collection" tends to mean a non-deterministic collector that does its own thing, and you don't have to think at all about memory. That does not apply to Swift, as of course, you need to understand the difference between strong and weak references. It's unfairly simplistic to couple the two.


No, RC is GC.

Most people think of Python as GCed language, and it uses mostly RC.

Any runtime that uses mark & sweep today may elect to use RC for some subset of the heap at some point in a future design, if that makes more sense. The mix of marking GC vs refcounting GC shouldn't affect the semantics of the program.


But it actually is just garbage collection.


It is but in the context of this discussion it's very clear that they meant a tracing garbage collector, which has a very different cost than atomic reference counting. Or to put it another way: you're technically correct, the worst kind of correct.


The low-latency part might not even be true. RC means that you don't have CPU-consuming heap scans, but if you free the last reference to a large tree of objects, freeing them all can take quite a lot of time, causing high latencies.
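
A quick, hedged sketch of that effect in Rust (plain ownership rather than refcounting, but the free-on-last-reference behaviour is the same): all of the deallocation work lands on the thread doing the final drop.

    use std::time::Instant;

    fn main() {
        // A million small heap allocations standing in for a large object tree.
        let big: Vec<Vec<u8>> = (0..1_000_000).map(|_| vec![0u8; 64]).collect();

        let t = Instant::now();
        drop(big); // every node is freed right here, not in some later GC cycle
        println!("freeing took {:?}", t.elapsed());
    }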


ARC, used by Swift, has its own cost.


True, but it's generally better than most full GC solutions (for processes running for relatively short times without the benefit of profile-guided optimization), and worse than languages with fully statically analyzable memory usage.

Note: that parenthetical is a very big caveat, because properly profile-optimized JVM executables can often achieve exceptional performance/development cost tradeoffs.

In addition however, ARC admits a substantial amount of memory-usage optimization given bytecode, which is now what developers provide to Apple on iOS. Not to mention potential optimizations by allowing Apple to serve last-minute compiled microarchitecture optimized binaries for each device (family).

To satiate the pedants... ARC is more or less GC where calls into the GC mechanism are compiled in statically and where there are at worst deterministic bounds on potential "stop the world" conditions.

While this may not be presently optimal because profile-guided approaches can deliver better performance by tuning allocation pool and collection time parameters, it's arguably a more consistent and statically analyzable approach that, with improvements in compilers, may yield better overall performance. It also provides tight bounds on "stop the world" situations, which also occur far less frequently on mobile platforms than in long-running server applications.

Beyond those theoretical bounds, it's certainly much easier to handle when you have an OS that is loading and unloading applications according to some policy. This is extremely relevant as most sensible apps are not actually long running.


> but it's generally better than most full GC solutions

I doubt that. It implies huge costs without giving any benefits of GC.

A typical GC has compaction, nearly stack-like fast allocation [1], and the ability to allocate a bunch of objects at once (just bump the heap pointer once for a whole bunch).

And both Perl and Swift do indeed perform abysmally, usually worse than both GC and manual languages [2].

> ARC is more of less GC

Well, no. A typical contemporary GC is generational, often concurrent, allowing fast allocation. ARC is just a primitive allocator with ref/deref attached.

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49....

[2] https://github.com/ixy-languages/ixy-languages
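
For readers unfamiliar with the "just bump the heap pointer" point above, here is a minimal, illustrative bump-allocator sketch in Rust (offsets into a byte buffer rather than real pointers, to keep it safe): allocation is one add plus a capacity check, which is what makes GC nursery allocation so cheap.

    // Minimal bump-allocator sketch (illustrative only, not a real GC nursery).
    struct Bump {
        buf: Vec<u8>, // backing region; freed all at once when `Bump` is dropped
        next: usize,  // offset of the next free byte
    }

    impl Bump {
        fn new(capacity: usize) -> Self {
            Bump { buf: vec![0u8; capacity], next: 0 }
        }

        // Hand out `size` bytes aligned to `align` (a power of two),
        // or None when the region is full.
        fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
            let start = (self.next + align - 1) & !(align - 1);
            let end = start.checked_add(size)?;
            if end > self.buf.len() {
                return None; // a real GC would trigger a minor collection here
            }
            self.next = end; // the "bump": one addition per allocation
            Some(start)
        }
    }

    fn main() {
        let mut nursery = Bump::new(1 << 20); // 1 MiB region
        // Allocating a batch of objects is just repeated pointer bumps.
        for _ in 0..1_000 {
            nursery.alloc(64, 8).expect("nursery exhausted");
        }
        println!("used {} bytes", nursery.next);
    }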


It is nowhere near stack-like. Stack is hot in cache. Heap memory in tlab is cold. Bringing the lines into cache is the major cost, not bumping the pointer.


> Stack is hot in cache. Heap memory in tlab is cold.

What? This doesn't make any sense. From the cache's POV the stack and a bump-allocated heap are the same thing. Both are contiguous chunks of memory where the next value is allocated right after the previous one.

The only difference between the stack and the bump-allocated heap is that the former has hardware support for pointer bumping and the latter has not. That's all.


You're missing the fact that the tlab pointer is only ever moved forward, so it always points to recently unused memory. Until the reset happens and it points back to the same memory again, the application managed to allocate several megabytes or sometimes hundreds of megabytes, and most of that new-gen memory does not fit even in L3 cache.

The stack pointer moves both directions and the total range of that back-and-forth movement is typically in kilobytes, so it may fit fully in L1.

Just check with perf what happens when you iterate over an array of 100 MB several times and compare that to iterating over 10 kB several times. Both are contiguous but the performance difference is pretty dramatic.
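
Something like this, as a rough sketch in Rust (assumed sizes; exact numbers vary by machine, and hardware prefetching narrows the gap for purely sequential walks, but the large buffer still cannot stay resident in cache):

    use std::time::Instant;

    // Touch `passes * buf.len()` bytes and return a checksum so the loop
    // isn't optimized away.
    fn walk(buf: &[u8], passes: usize) -> u64 {
        let mut sum = 0u64;
        for _ in 0..passes {
            for &b in buf {
                sum = sum.wrapping_add(b as u64);
            }
        }
        sum
    }

    fn main() {
        let big = vec![1u8; 100 * 1024 * 1024]; // ~100 MB: spills far past L3
        let small = vec![1u8; 10 * 1024];       // 10 kB: fits in L1

        let t = Instant::now();
        let s1 = walk(&big, 10); // ~1 GB of traffic against cold memory
        println!("big:   {:?} (checksum {})", t.elapsed(), s1);

        let t = Instant::now();
        let s2 = walk(&small, 100_000); // ~1 GB of traffic against hot cache lines
        println!("small: {:?} (checksum {})", t.elapsed(), s2);
    }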

Besides that, there is also an effect that the faster you allocate, the faster you run out of new gen space, and the faster you trigger minor collections. These are not free. The faster you do minor collections, the more likely it is for the objects to survive. And the cost is proportional to survival rate. That's why many Java apps tend to use pretty big new generation size, hoping that before collection happens, most of young objects die.

This is not just theory - I've seen it too many times: reducing the allocation rate to nearly zero caused significant speedups, by an order of magnitude or more. Reducing memory traffic is also essential to get good multicore scaling. It doesn't matter that each core has a separate tlab when their total allocation rate is so high that they saturate the LLC - main memory link. It is easy to miss this problem with classic method profiling, because a program with this problem manifests as everything just being mysteriously slow, with no obvious bottleneck.


> You're missing the fact that the tlab pointer is only ever moved forward, so it always points to recently unused memory. Until the reset happens and it points back to the same memory again, the application managed to allocate several megabytes or sometimes hundreds of megabytes, and most of that new-gen memory does not fit even in L3 cache.

Yes, you are right about stack locality. It indeed moves back and forth, keeping the effectively used memory region quite small.

> These are not free. The faster you do minor collections, the more likely it is for the objects to survive. And the cost is proportional to survival rate.

Yes, that's true. Immutable languages do much better here, having small minor heaps (OCaml's is 2MB on amd64) and very low survival rates (with many objects allocated directly on the older heap if they are known in advance to be long-lived).

Now I understand your point better and I agree.


Swift's form of GC is one of the slowest ones; no wonder porting to Rust made it faster, especially given that most tracing GCs outperform Swift's current implementation.

If one goes with reference counting as the GC implementation, then one should take the effort to use hazard pointers and related optimizations.


A C app will tend to outperform a Java or Golang app by 3x, so it isn't too surprising.


Could you please provide a source for this?

Java is very fast and 3X slower is a pretty wild claim.


Those people have a really good claim to having the most optimized code for each language. They've found Java to be 2 to 3 times slower than C and Rust (with much slower outliers).

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

In the real world, you won't get things as optimized in higher-level languages, because optimized code looks completely unidiomatic. A 3x speedup from Java is a pretty normal claim.


Speaking of, I wish there were an "idiomatic code benchmarks game". Some of us want to compare language speed for common use cases vs trying to squeeze every last piece of performance from it.



D, Nim and Crystal all do very well on all metrics. Author finds Rust pretty close but not as maintainable. Interesting that the top 3 (performance close to C++, but more maintainable) all are niche languages that haven't really broken into the mainstream.


I really wish Intel or MS or someone would fund D so it could give Go and Rust a run for their money. It's as fast (or faster), expressive, powerful and, subjectively, easier to pick up and code in than Rust. It just needs some backers with muscle.


You probably have Swift in the same niche... and more elegant, not completely ignoring the last few decades of language research etc. If you want something more minimalistic there's Zig. D is just "C++ + hindsight", nothing special, only extra fragmentation of dev ecosystem by bringing in another language.

Ofc, Apple is not MS, so Swift developer experience and docs kind of suck (if you're not in the "iOS bubble" where it's nice and friendly), and multiplatform dev experience especially sucks even worse...

And "easier to pick up and code in" might not necessarily be an advantage for a systems language - better start slowly putting the puzzle together in your head, and start banging up code others need to see only after you've internalized the language and its advanced features and when (not) to use them! It helps everyone in the long run. This is one reason why I'd bet on Rust!


"completely ignoring the last few decades"

Well, for fairness, D is quite a bit older than Swift. (It's nearly as much older than Swift as it is younger than C++!) But what do you think pushes Swift out of the "C++ with hindsight" basket?


Maybe some big FAANG company. Start at the beginning of the acronym, I guess. I wonder if anyone could persuade anyone at Facebook to do a little bit of D professionally. I bet if even one serious Facebook engineer made a serious effort to use D, its adoption woes would be over.


Facebook is unlikely to move to D at this point and already has multiple serious projects in Rust (that they’re happy about, AFAIK).


Walter Bright, the author of D, worked at Facebook writing a fast preprocessor for C/C++ in D.


Alexandrescu, too.

They also talked about it pretty frequently.

Maybe ggp's claim that D just needs a heavy hitter backing it might be misplaced.


I don't think it's misplaced. Facebook wasn't really backing D by hiring them. And it doesn't seem like that specific project is active anymore.


Walter too? AFAIK it was just Andrei.


I thought Facebook used to “unofficially” support D? Or am I mixing languages up?

Right now, I think eBay is one of the big companies that uses D.

Edit: Sibling comment beat me to it.


That is what MS is doing with C#/F# and .NET Native/Xamarin AOT/CoreRT, with the experience taken from Midori.

So I doubt they would sponsor D.


C# is an attempt at making Java good; F# is an attempt at making a subset of Haskell popular. .NET Native/Xamarin/CoreRT are UI frameworks. There is nothing there that would compete with C++.

I don't think MS has any interest in improving C++ (look at their compiler). But that's not because of competing activities.


Except the lessons learned from Midori and .NET Native on UWP.

Visual C++ is the best commercial implementation of C++ compilers, better not be caught using xlc, aCC, TI, IAR, icc and plenty of other commercial offerings.

If C++ has span, span_view, modules, co-routines, core guidelines, and the lifetime profile static analyser, it is in large part due to work started and heavily contributed to by Microsoft at ISO, and their collaboration with Google.

As for competing with C++, it is quite clear, especially when comparing the actual software development landscape with the 90's, that C++ has lost the app development war.

Nowadays across all major consumer OSes it has been relegated to the drivers and low level OS services layer like visual compositor, graphics engine, GPGPU binding libraries.

Ironically, from those mainstream OSes, Microsoft is the only one that still cares to provide two UI frameworks directly callable from C++.

Which most Windows devs end up ignoring in favour of the .NET bindings, as Kenny Kerr mentions in one of his talks/blog posts.

Back to the D issue, Azure IoT makes use of C# and Rust, and there is Verona at MSR as well, so as much I would like to see them spend some effort on D, I don't see it happening.


CoreRT is a UI framework? What?


I strongly believe that people should consider the ecosystem as part of any maintainability risk assessment or claim.

Rust has by far the better ecosystem compared to those three.


That's a highly subjective claim to make without evidence. All three languages are actively maintained and are growing.

I'll refute it simply: Rust's web development story isn't clean out of the box like Crystal's, which ships with an HTML language out of the box. So it could be categorized as a poor choice in comparison to Crystal.


> ships with an HTML language

Did you mean HTTP server? If so, there are at least 3 good ones in Rust that are only a `cargo add` away. If you've already taken the trouble to set up a toolchain for a new language, surely a single line isn't asking too much.


Rust won't get a maintainability prize for short programs with simple algorithms.

Its Haskell-like features only help on large and complex programs.


They do help library writers, and therefore turn large and complex programs into shorter programs.


— Expressiveness "keep in mind that this is a subjective metric based on the author's experience!"

— Maintenance Complexity "keep in mind that this is a subjective metric based on the author's experience!"


>> I wish there were an "idiomatic code benchmarks game"

Help make one:

— contribute programs that you consider to be "idiomatic"

https://salsa.debian.org/benchmarksgame-team/benchmarksgame/...

— use those measurement and web scripts to publish your own "common use cases" "idiomatic code" benchmarks game

----

>> compare language speed for common use cases vs trying to squeeze every last piece of performance

— please be clear about why you wish to compare the speed of programs written as if speed did not matter

----

??? What do you think is not "idiomatic" about programs such as:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


There is a lot more than the language/compiler that influences the results, but at least these benchmarks are closer to the real world than solving math puzzles in micro-benchmarks.

https://www.techempower.com/benchmarks/


The problem would be to get a definition of idiomatic code.

Would it be fair to accept an optimization in one language and refuse it in another because it is not idiomatic?


VMs with JIT like the JVM are only ever really fast/competitive with C in small numerical micro-benchmarks where the code can be hyper-optimized.

Most code will be considerably slower due to a lot of factors.

Java in particular is a very pointer-heavy language, made up of pointers to pointers to pointers everywhere, which is really bad for our modern systems that often are much more memory latency than CPU constrained.

A factor of 2-4x to languages like C++ or Rust for most code seems plausible (and even low) unless the limiting factor is external, like network or file system IO.


This stuff is really hard to pin down though. I've been reading these sorts of debates forever.

It's true that pointer chasing really hurts in some sorts of program and benchmark. For sure. No argument. That's why Project Valhalla exists.

But it's also my view that modern C++ programming gets away with a lot of slow behaviours that people don't really investigate or talk about because they're smeared over the program and thus don't show up in profilers, whereas actually the JVM fixes them everywhere.

C++ programs tend to rely much more heavily on copying large structures around than pointer-heavy programs. This isn't always or even mostly because "value types are fast". It's usually because C++ doesn't have good memory management so resource management and memory layout gets conflated, e.g. std::vector<BigObject>. You can't measure this because the overheads are spread out over the entire program and inlined everywhere, so don't really show up in profiling. For the same reasons C++ programs rely heavily on over-specialised generics where the specialisation isn't actually a perf win but rather a side effect of the desire for automatic resource management, which leads to notorious problems with code bloat and (especially) compile time bloat.

Another source of normally obscured C++ performance issues is the heap. We know malloc is very slow because people so frequently roll their own allocators that the STL supports this behaviour out of the box. But malloc/new is also completely endemic all over C++ codebases. Custom allocators are rare and restricted to very hot paths in very well optimised programs. On the JVM allocation is always so fast it's nearly free, and if you're not actually saturating every core on the machine 100% of the time, allocation effectively is free because all the work is pushed to the spare cores doing GC.

Yet another source of problems is cases where the C++ programmer doesn't or can't actually ensure all data is laid out in memory together because the needed layouts are dynamically changing. In this case a moving GC like in the JVM can yield big cache hit rate wins because the GC will move objects that refer to each other together, even if they were allocated far apart in time. This effect is measurable in modern JVMs where the GC can be disabled:

https://shipilev.net/jvm/anatomy-quarks/11-moving-gc-localit...

And finally some styles of C++ program involve a lot of virtual methods that aren't always used, because e.g. there is a base class that has multiple implementations but in any given run of the program only one base class is used (unit tests vs prod, selected by command line flag etc). JVM can devirtualise these calls and make them free, but C++ compilers usually don't.

On the other hand all these things can be obscured by the fact that C++ these days tends only to be used in codebases where performance is considered important, so C++ devs write performance tuned code by default (or what they think is tuned at least). Whereas higher level languages get used for every kind of program, including the common kind where performance isn't that big of a deal.


The costs of languages like C++ also get worse the older a program is.

Without global knowledge of memory lifetimes, maintainers make local decisions to copy rather than share.


> We know malloc is very slow because people so frequently roll their own allocators that the STL supports this behaviour out of the box. But malloc/new is also completely endemic all over C++ codebases. Custom allocators are rare and restricted to very hot paths in very well optimised programs. On the JVM allocation is always so fast it's nearly free, and if you're not actually saturating every core on the machine 100% of the time, allocation effectively is free because all the work is pushed to the spare cores doing GC.

Allocation in a C++ program is going to be about the same speed as in a Java program. Modern mallocs are doing basically the same thing on the hot-path: bumping the index on a local slab allocator.


3x might be a bit too much today, but it's definitely slower than C. Also to be considered is the VM overhead, not just the executed code.

Here are some benchmarks; I'll leave to the experts out there to confirm or dismiss them.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> 3x might be a bit too much today, but it's definitely slower than C.

If anything the gap is increasing not shrinking. JVM is terrible at memory access patterns due to the design of the language, and designing for memory is increasingly critical for maximum performance on modern systems. All the clever JIT'ing in the world can't save you from the constant pointer chasing, poor cache locality, and poor prefetching.

The gap won't shrink until Java has value types. Which is on the roadmap, yes, but still doesn't exist just yet.

The problem with those benchmarks is if you look at the Java code you'll see it's highly non-idiomatic. Almost no classes or allocations. They almost all exclusively use primitives and raw arrays. Even then it still doesn't match the performance on average of the C (or similar) versions, but if you add the real-world structure you'd find in any substantial project that performance drops off.


> Almost no classes or allocations.

Plenty of `inner static classes`. Where's the up-to-date comparison showing that `static` makes a performance difference for those tiny programs?

Plenty of TreeNode allocations.

----

> The problem with …

The problem with saying "in any substantial project that performance drops off" is when we don't show any evidence.


When you say "value types" in this context, you mean "copy-only types"?


I mean "value types" https://www.jesperdj.com/2015/10/04/project-valhalla-value-t... & http://cr.openjdk.java.net/~jrose/values/values-0.html

I don't know what you mean by "copy-only types." I'm not finding any reference to that terminology in the context of language design.


Ah, thanks for the link; I wasn't sure what it meant in the context of Java, since it's possible to get value semantics using a class.

Sorry about the confusion. I meant for the quotes around "copy-only" to indicate that it wasn't really a standard term, but I marked "value types" the same way, so that didn't really work. By "copy-only" I meant something you couldn't have more than one reference to: every name (variable) to which you assign the data would have its own independent copy.


> By "copy-only" I meant something you couldn't have more than one reference to: every name (variable) to which you assign the data would have its own independent copy.

That's not really a requirement of value types, no. C# has value types (structs) and you can have references to them as well (ref & out params).

In general though yes it would be typically copied around, same as an 'int' is. Particularly if Java doesn't add something equivalent to ref/out params.


I know, that's why I asked for clarification originally. :) Anyways, I appreciate the details.


However the memory usage difference is astonishing for some of those benchmarks - using 1000x more memory is only acceptable for some situations.


This may be overly cynical, but don’t you have it backwards? 1000x greater memory usage is acceptable for most applications.

There are only a few applications for which it isn’t acceptable.

Just look at all the massive apps built on Electron. People wouldn’t do that if it wasn’t effective.


Wouldn't you say there's a difference between "effective" and "getting away with it"? If non-technical users see that their daily computing lives are made more complicated (because of lowered performance) by having n Electron apps running at the same time, they may not understand the reasons, but they will certainly choose a different solution that has the features they need, where available.


Wouldn't you say there's a difference between "effective" and "getting away with it"?

I used to think that but now I’m not so sure. :(


— default JVM allocation ~35 MB looks astonishing for tiny programs that don't allocate memory

— memory usage is less different for tiny programs that do allocate memory: reverse-complement, k-nucleotide, binary-trees, mandelbrot, regex-redux


Agreed, and ironically the most widely used Java platform (Android), despite its VM optimizations, is the one which would benefit the most from running only native code.

I mean, those 1GB RAM 7 years old slow as molasses phones getting dust into drawers or being littered into landfills would scream if they didn't have to run everything behind a VM.


Make no mistake — Android isn't memory hungry because of Java alone. Android 4 used to fit comfortably in 1 GB of RAM, but Android 10 can no longer run on such devices. Both old and new versions use the JVM, but newer Android has a lot of "helpful" system services, such as the "Usage Statistics" service [1] and the "Autofill Service" [2].

Google's spyware is becoming more invasive and thus more memory-hungry.

1: https://developer.android.com/reference/android/app/usage/Us...

2: https://developer.android.com/reference/android/service/auto...


Really depends on the domain. There are some things that are a lot easier to scale up in a language with a great concurrent GC, because it makes writing some lock-free data structures fundamentally easier (no complicated memory reclamation logic, and it's trivial to avoid ABA problems).


GC makes it easier to write, but not necessarily better. Modern state-of-the-art Java GCs operate on a global heap, so you often pay for what you do not use. In languages like Rust or C++ you can build small, locally GCed heaps, where you can limit GC overhead to just the few particular structures that need GC, not everything. Other structures like caches therefore don't affect GC pause times.

And the "hardness" of writing lockless structures is strongly offset by libraries, so unless you're doing something very exotic, it is rarely a real problem.


“GC makes it easier to write.” I disagree. GC means I don’t get deterministic destruction which means I can’t use RAII properly.


The post was about writing lockless structures. With lockless RAII is of no use. See ABA problem. RAII is awesome in other places.


It depends greatly on the problem domain. The difference might be near zero, or you might be able to get ~16x better performance (using say, AVX-512 intrinsics). Then again, is intrinsics really C? Not really, but you can do it. What if you have to abandon using classes when you want to, in order to get the memory layout you want in Java, are you still using Java?


This statement is definitely wrong in this generic, blanket fashion. Also, I would lay upon you the burden of proof for it :).

Tight, low-level code in Java and Go is roughly as fast as average C code. The Go compiler is known to be less good at optimizing code than e.g. GCC, but in many cases this makes little practical difference, while the Java JIT compilers have become excellent to the point where they often beat GCC, especially as they can use run-time profiling for code optimization. So they can optimize the code for the actual task at hand.

Where the languages differ in "speed" is their runtime environment. Java and Go are languages with garbage collection, which of course means that some amount of CPU is required to perform GC. But as the modern garbage collectors run in parallel with the program, this CPU effort often enough is no bottleneck. On the other side, manual memory management has different performance trade-offs, which in many cases can make it quite slow on its own.


This really depends on what job a program does and how well the C program is implemented.


I call utter bullshit especially when dealing with threads. I think you'll spend so much time debugging pointers, the stack and your memory allocations that switching to a more modern language could save you significant debugging time.

But now I sound like a Geico (insurance) commercial. Sorry about that.


> The JVM world tends to solve this problem by using off-heap caches. See Apache Ignite [0] or Ehcache [1].

For those who care, I was interested how off-heap caching works in Java and I did some quick searching around the Apache Ignite code.

The meat is here:

- GridUnsafeMemory, an implementation of access to entries allocated off-heap. This appears to implement some common Ignite interface, and invokes calls to a “GridUnsafe” class https://github.com/apache/ignite/blob/53e47e9191d717b3eec495...

- This class is the closest to the JVM’s native memory, and wraps sun.misc.Unsafe: https://github.com/apache/ignite/blob/53e47e9191d717b3eec495...

- And this, sun.misc.Unsafe, is what it’s all about: http://www.docjar.com/docs/api/sun/misc/Unsafe.html

It’s very interesting because I did my fair share of JNI work, and context switches between JVM and native code are typically fairly expensive. My guess is that this class was likely one of the reasons why Sun ended up implementing their (undocumented) JavaCritical* etc functions and the likes.


Unsafe lets you manipulate memory without any JNI overhead other than when allocating or de-allocating memory, and that is usually done in larger chunks and pooled to avoid the overhead at steady state. Netty also takes advantage of Unsafe to move a lot of memory operations off the java heap.

Unsafe was one of the cooler aspects to Java that Oracle is actively killing for, well, no good reason at least.


They aren't killing it. They're steadily designing safe and API stable replacements for its features, with equal performance. That is a very impressive engineering feat!

For instance fast access to un-GCd off heap memory is being added at the moment via the MemoryLayout class. Once that's here apps that upgrade won't need to use Unsafe anymore. MemoryLayout gives equivalent performance but with bounds checked accesses, so you can't accidentally corrupt the heap and crash the JVM.

They've been at it for a long time now. For instance VarHandle exposes various low level tools like different kinds of memory barriers that are needed to implement low level concurrency constructs. They're working on replacements for some of the anonymous class stuff too.


> Unsafe was one of the cooler aspects to Java that Oracle is actively killing for, well, no good reason at least.

I mean, there's the obvious reason that it breaks the memory safety aspect that Java in general guarantees. The whole point of the feature is to subvert the language & expectations.

I'm not saying they should remove it, but it's pretty hard to argue there's "no good reason" to kill it, either. It is, after all, taking the worst parts of C and ramming it into a language that is otherwise immune from that entire class of problems.


C# seems to have a neat middle ground for this kind of stuff with their Span<T> api.


True, but we had our own version of unsafe for a much longer time. MS was just pragmatic enough to allow it across the ecosystem.

I'm guessing at least some of that was a side effect of wanting to support C++; not having pointers as an option would have killed C++/CLI from the get go.


Unsafe is being replaced by less error prone APIs, not killed.

Project Panama is what is driving that effort.


> context switches between JVM and native code are typically fairly expensive

Aren't these Unsafe memory read and write methods intrinsified by any serious compiler? I don't believe they're using JNI or doing any kind of managed/native transition, except in the interpreter. They turn into the same memory read and write operations in the compiler's intermediate representation as Java field read and writes do.


They are optimized, yes, but from what I recall from reading the JVM code a few years ago, some optimizations don't get applied to those reads/writes. For example, summing two arrays together will be vectorized to use SSE instructions while doing so through Unsafe won't [0].

[0] https://cr.openjdk.java.net/~vlivanov/talks/2017_Vectorizati...


The idea is that that call is still less expensive than going over the wire and MUCH less expensive than having the GC go through that heap now and then.


Yes, sorry, I should have elaborated: those Critical JNI calls avoid locking the GC and in general are much more lightweight. This is available to normal JNI devs as well, it's just not documented. They were primarily intended for some internal things that Sun needed.

I’m now guessing that this might actually have been those Unsafe classes as an intended use case. It makes total sense and I can see how that will be very fast.


> I can't speak for how their Rust cache manages memory, but the thing to be careful of in non-GC runtimes (especially non-copying GC) is memory fragmentation.

As far as I know, a mark-and-sweep collector like Go's doesn't have any advantage over malloc/free when it comes to memory fragmentation. Am I missing some way in which Go's GC helps with fragmentation?


The Go GC implementation uses a memory allocator that was based on TCMalloc (but has diverged from it quite a bit). It uses free lists of multiple fixed size-classes, which helps reduce fragmentation. That's why the Go GC is non-copying.


I’m not sure I follow. GC implementations that don’t copy (relocate) are inherently subject to the performance cost of “fragmentation” (in the sense of scattering memory accesses over non-adjacent regions). This is a very high price to pay when you’re dealing with modern hardware.


The allocator underneath keeps track of freed memory, so the next allocation has a high chance of being squeezed into a memory region that has been used before. It's obviously not as good as, say, a GC that relocates after sweeping, but at least it doesn't leave gaping holes.


Indeed, but it also doesn’t maintain locality of access nearly as well for young objects (the most commonly manipulated ones) and even older ones that survive.


One related point: the article mentions utilizing Rust's BTreeMap, which manages its heap allocations with cache efficiency in mind: https://doc.rust-lang.org/std/collections/struct.BTreeMap.ht....

The guts of BTreeMap's memory management code is here: https://github.com/rust-lang/rust/blob/master/src/liballoc/c.... (warning: it is some of the most gnarly Rust code I've ever come across - very dense, complex, and heavy on raw pointers; this is not a criticism at all, just in terms of readability). Anecdotally I've had very good results using BTreeMap in my own projects.
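
For anyone who hasn't used it, a tiny, hedged sketch of what BTreeMap buys you over a hash map: keys stay sorted, so ordered iteration and range scans are cheap (the kind of sorted-membership access pattern a chat service plausibly needs).

    use std::collections::BTreeMap;

    fn main() {
        // Hypothetical member list: user id -> display name.
        let mut members: BTreeMap<u64, String> = BTreeMap::new();
        members.insert(42, "alice".to_string());
        members.insert(7, "bob".to_string());
        members.insert(1000, "carol".to_string());

        // Keys come back in sorted order, no extra sort step needed.
        for (id, name) in &members {
            println!("{id}: {name}");
        }

        // Range queries are a natural fit for a B-tree.
        for (id, name) in members.range(10..=500) {
            println!("in range: {id} -> {name}");
        }
    }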

In terms of how the "global" allocator impacts performance, I'd expect it to play a bigger role for Strings (I mean, it's a chat program), and possibly in how the async code "desugars" into storing futures and callbacks on the heap (just guessing, I'm not an expert on the Rust async internals).


I was a bit surprised that BTreeMap turned out to be more memory-efficient than HashMap; can anyone shed some light on this?

One of these days I’ll get around to turning my Rust set implementation[0] into a full-blown map (it’s already <50% the size of BTreeSet for ints)...

[0] https://github.com/senderista/rotated-array-set


In the current context, fragmentation refers more to the problem of consuming extra memory through fragmentation, which malloc implementations like the one Go (or Rust, or glibc) uses can often mitigate.


Sure, but the malloc implementation in Rust probably does something similar.

What I wanted to understand is what is the difference in fragmentation between a non-copying, non-compacting GC and a non-GC runtime.


> The JVM world tends to solve this problem by using off-heap caches. See Apache Ignite [0] or Ehcache [1].

Yeah, but I really don't buy your argument.

When you are reduced to doing manual memory management and fighting the GC of your language, maybe you should simply not use a language with a GC in the first place.

They are right to use Rust (or C/C++) for that. It's not for nothing that Redis (C) is so successful in the LRU domain.

> It's worth mentioning that Apache Cassandra itself uses an off-heap cache.

And still ScyllaDB (C++) is able to completely destroy Cassandra in terms of average latency [0].

[0]: https://www.scylladb.com/product/benchmarks/


Maybe I've missed this, but why do they need a particularly large LRU cache? Surely this isn't all one process, so presumably they could reduce spikes by splitting the same load across yet more processes?


Larger cache = faster performance and less load on the database.

I only skimmed the article, but the problem they had with Go seems to be the GC cost incurred from having a large cache. Their cache eviction algorithm was efficient, but every 2 minutes there was a GC run which slowed things down. Re-implementing the algorithm in Rust gave them better performance because the memory was freed right after the cache eviction.
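
A minimal, hedged sketch of that last point (not Discord's code; eviction-on-insert only, with no recency update on reads): in Rust, the evicted entry's destructor runs at the moment of eviction, so the memory comes back immediately instead of waiting for a later GC cycle.

    use std::collections::{HashMap, VecDeque};

    struct TinyCache {
        map: HashMap<u64, Vec<u8>>, // key -> cached payload
        order: VecDeque<u64>,       // insertion order, oldest at the front
        capacity: usize,
    }

    impl TinyCache {
        fn new(capacity: usize) -> Self {
            TinyCache { map: HashMap::new(), order: VecDeque::new(), capacity }
        }

        fn insert(&mut self, key: u64, value: Vec<u8>) {
            if self.map.len() >= self.capacity {
                if let Some(oldest) = self.order.pop_front() {
                    // `remove` returns the evicted value; it is dropped right
                    // here, freeing its allocation immediately.
                    self.map.remove(&oldest);
                }
            }
            self.order.push_back(key);
            self.map.insert(key, value);
        }
    }

    fn main() {
        let mut cache = TinyCache::new(2);
        cache.insert(1, vec![0u8; 1024]);
        cache.insert(2, vec![0u8; 1024]);
        cache.insert(3, vec![0u8; 1024]); // evicts and frees key 1 on the spot
        assert!(!cache.map.contains_key(&1));
    }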

Splitting it across more processes will result in more cache misses and more DB calls.


I am of course talking about the same amount of total cache RAM, just split among more processes. Depending on distribution of the calls, you might get more cache misses, but I don't think it's guaranteed, and if it is, I don't think we can assume it's significant. Heck, you could even use more cache RAM; the cost of a total rewrite plus ongoing maintenance in a new language covers a fair bit of hardware these days.


Great comment and thanks for the reading material.

Now I'm wondering if there's a Rust library for a generational copying arena--one that compacts strings/blobs over time.


Generational arenas yes, but copying, I'm not aware of one. It's very hard to get the semantics correct, since you can't auto-re-write pointers/indices.


Perhaps such a library could help you record the locations of the variables that contain pointers to the strings and keep those pointers up to date as ownership of the string moves from variable to variable?

In other words, doing some of the work a moving, compacting collector would do during compaction, but continuously, during normal program execution.


There's no way to hook into the move, so I don't see how it would be possible, or at least, not with techniques similar to compacting GCs.


Maybe by reifying the indirection? The compacting arena would hand out smart pointers which would either always bounce through something (to get from an identity to the actual memory location, at a cost) or it'd keep track of and patch the pointers it handed out somehow.

Possibly half and half, I don't remember what language it was (possibly obj-c?) which would hand out pointers, and on needing to move the allocations it'd transform the existing site into a "redirection table". Accessing pointers would check if they were being redirected, and update themselves to the new location if necessary.

edit: might have been the global refcount table? Not sure.


Yeah so I was vaguely wondering about some sort of double indirection; the structure keeps track of "this is a pointer I've handed out", those pointers point into that, which then points into the main structure.

I have no idea if this actually a good idea, seems like you get rid of a lot of the cache locality advantages.
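
A minimal, hedged sketch of that double-indirection idea in Rust (names are made up for illustration): handles index a small table, the table indexes the packed storage, and a compaction only has to rewrite the table, never the handles callers are holding.

    #[derive(Clone, Copy)]
    struct Handle(usize);

    struct Arena {
        table: Vec<usize>, // handle id -> current index into `data`
        data: Vec<String>, // stand-in for the packed backing storage
    }

    impl Arena {
        fn new() -> Self {
            Arena { table: Vec::new(), data: Vec::new() }
        }

        fn insert(&mut self, value: String) -> Handle {
            self.data.push(value);
            self.table.push(self.data.len() - 1);
            Handle(self.table.len() - 1)
        }

        // Every access pays for two hops (handle -> table -> data), which is
        // exactly the locality cost being debated in this subthread.
        fn get(&self, h: Handle) -> &String {
            &self.data[self.table[h.0]]
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let h = arena.insert("hello".to_string());
        assert_eq!(arena.get(h), "hello");
    }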


This sounds a lot like classic MacOS (pre-Darwin) memory allocation. You were allocated a handle, which you called Lock on to get a real pointer. After accessing the memory, you called Unlock to release it. There was definitely a performance hit for that indirection.


It's the same on classic Windows (pre-Windows 95) memory allocation. GlobalAlloc with GMEM_MOVEABLE or LocalAlloc with LMEM_MOVEABLE returned a handle, which you called GlobalLock or LocalLock on to get a real pointer. After accessing the memory, you called GlobalUnlock or LocalUnlock to release it. Of course, this being Microsoft, you can still call these functions inherited from 16-bit Windows even on today's 64-bit Windows. (See Raymond Chen's "A history of GlobalLock" at https://devblogs.microsoft.com/oldnewthing/20041104-00/?p=37...).


I don't know that the cache locality would be a big issue (your indirection table would be a small-ish array), however you'd eat the cost of doubling the indirections, each pointer access would be two of them.


On top of the cost of the extra pointer lookup, you also run into cache coherency issues when dealing with threading. So then you need to use atomic ops or locks or cache flushing which makes it even more expensive.

Rust is better suited to deal with it since there's a similar issue with refcounting across threads, so you might be able to get away with doing it for objects that are exclusive to one thread.


I would give out handles and have a Guard Object, that allows you to get smart pointers from handles as long as Guard Object is in scope. Then when Guard Object is out of scope, the smart pointers would get invalidated.


It would be possible (and not that hard) for Copy types, but you'd need some custom derive traits if you had any pointers to managed objects.


> From a purely architectural perspective, I would try to put cacheable material in something like memcache or redis

You cannot use a caching server at that scale with those latency requirements. It has to be embedded


Something like rocksdb (https://rocksdb.org/) then


You cannot use that either, due to I/O block device saturation, even on enterprise PCIe NVMe. Not enough parallel IOPS. Only RAM can be used.


> From a purely architectural perspective, I would try to put cacheable material in something like memcache or redis, or one of the many distributed caches out there. But it might not be an option.

Can you speak to why using something like memcache or redis may not be an option?


For latency-sensitive services, having to traverse the network to access a shared cache may be too slow. To use the current story as an example, you'd be trading off an occasional 100-millisecond latency spike every 2 minutes for an added 1-2ms of latency for every request.


Don't know enough about Rust, but I think Go would benefit immensely by allowing its users to disable GC and allow de-allocating memory by hand. GC is great for simpler applications, but more complex projects end up fighting so much with memory and GC in Go that all the benefits of automatic de-allocations are negated. Love every other aspect of Go.


Wow. We literally just published a post on why not to put a cache in front of your server to mask its bad performance behind a layer of complexity. tl;dr: make sure you have a solid DB to begin with. (Forgive the gated asset, but it's a good read!)

https://go.scylladb.com/7-reasons-no-external-cache-database...


Seems like you were hitting: runtime: Large maps cause significant GC pauses #9477 [0]

Looks like this issue was resolved for maps that don't contain pointers by [1]. From the article, sounds like the map keys were strings (which do contain pointers, so the map would need to be scanned by the GC).

If pointers in the map keys and values could be avoided, it would have (if my understanding is correct) removed the need for the GC to scan the map. You could do this, for example, by replacing string keys with fixed-size byte arrays. Curious if you experimented with this approach?

[0] https://github.com/golang/go/issues/9477

[1] https://go-review.googlesource.com/c/go/+/3288


Finding out if that does resolve the author's issue would be interesting, but I'm not sure that would be particularly supportive data in favor of Go. If anything it would reinforce the downsides of Go's GC implementation: prone to sudden pitfalls only avoidable with obtuse, error-prone fiddling that makes the code more complex.

After spending weeks fighting with Java's GC tuning for a similar production service tail latency problem, I wouldn't want to be caught having to do that again.


The good news is that Go's GC has basically no tunables, so you wouldn't have spent weeks on that. The bad news is that it has basically no tunables, so if it's a tuning issue you're either fucked or have to put "tuning" hacks right into the code if you find any that work (e.g. Twitch's "memory ballast" to avoid overly aggressive GC runs: https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...)


There are tradeoffs with all languages. C++ avoids the GC, but you then have to make sure you know how to avoid the common pitfalls of that language.

We use C++ at Scylla (saw that we got a shout-out in the blog! Woot!), but it's not like there isn't a whole industry of blog posts about avoiding C++ pitfalls.

C++ pitfalls through the years... • https://www.horstmann.com/cpp/pitfalls.html (1997) • https://stackoverflow.com/questions/30373/what-c-pitfalls-sh... (2008) • http://blog.davidecoppola.com/2013/09/cpp-pitfalls/ (2013) • https://www.typemock.com/pitfalls-c/ (2018)

I am not saying any of these (Go, Rust, C++, or even Java) are "right" or "wrong" per se, because that determination is situational. Are you trying to optimize for performance, for code safety, for taking advantage of specific OS hooks, or oppositely, to be generically deployable across OSes, or for ease of development? For the devs at Scylla, the core DB code is C++. Some of our drivers and utilities are Golang (like our shard aware driver). There's also a Cassandra Rust driver — it'd be sweet if someone wants to make it shard-aware for Scylla!


(Discord infra person here.)

Actually we didn't update the reference to Cassandra in the article -- the read states workload is now on Scylla too, as of last week. ;)

We'll be writing up a blog post on our migration with Scylla at some point in the next few months, but we've been super happy with it. I replaced our TokuMX cluster with it and it's faster, more reliable, _and_ cheaper (including the support contract). Pretty great for us.


Woot! Go you! (Or Rust you! Whichever you prefer!)


What a glorious combination of things! What a shame faster, more reliable and cheaper don't usually go together, but that's the challenge all developers face...


The common factor in most of my decisions to look for a new job has been realizing that I feel like a very highly compensated janitor instead of a developer.

Once I spend even the plurality of my time cleaning up messes instead of doing something new (and there are ways to do both), then all the life is sucked out of me and I just have to escape.

Telling me that I have to keep using a tool with known issues that we have to work around with processes or patches would be super frustrating. And the more times we stumble over that problem, the worse my confirmation bias will get.

Even if the new solution has a bunch of other problems, the set that is making someone unhappy is the one that will cause them to switch teams or quit. This is one area where management is in a tough spot with respect to rewrites.

Rewrites don't often fix many things, but if you suspect they're the only thing between you and massive employee turnover, you're between a rock and a hard place. The product is going to change dramatically, regardless of what decision you make.


While I completely agree with the "janitor" sentiment... and for Newton's sake I feel like Wall-E daily...

> Telling me that I have to keep using a tool with known issues that we have to process or patches to fix would be super frustrating.

All tools have known issues. It's just that some have way more issues than others. And some may hurt more than others.

Go has reached an interesting compromise. It has some elegant constructs and interesting design choices (like static compilation which also happens to be fast). The language is simple, so much so that you can learn the basics and start writing useful stuff in a weekend. But it is even more limiting than Java. A Lisp, this thing is not. You can't get very creative – which is an outstanding property for 'enterprises'. Boring, verbose code that makes you want to pull your teeth out is the name of the game.

And I'm saying this as someone who dragged a team kicking and screaming from Python to Go. That's on them – no-one has written a single line of unit tests in years, so now they at least get a whiny compiler which will do basic sanity checks before things blow up in prod. Things still 'panic', but less frequently.


I'll take boring over WTF code any day :-)


It's not necessarily an either-or though. I'll take clear, concise expressive code over either!


Most development jobs on products that matter involve working on large, established code bases. Many people get satisfaction from knowing that their work matters to end users, even if it's not writing new things in the new shiny language or framework. Referring to these people as "janitors" is pretty damn demeaning, and says more about you than the actual job. Rewrites are rarely the right call, and doing one simply to entertain developers is definitely not the right call.


>Referring to these people as "janitors" is pretty damn demeaning,

"Referring to the term of "janitors" as demeaning is pretty demeaning and says more about you than your judgement of the parent."

I don't like this rhetoric device you just used.

Also, I think that janitors do important work as well.


Let's not fool ourselves.

The demeaning of janitors was introduced by GP by describing it as something they would rather not do.

No mental gymnastics required.


He said he felt like a janitor, next guy said he demeaned others as janitors, and now you are saying he demeaned janitors. There is a level of gymnastics going on.


First paragraph:

> The common factor in most of my decisions to look for a new job has been realizing that I feel like a very highly compensated janitor instead of a developer.

So for that person, feeling like a janitor is incentive for seeking a new job. It's that simple really.


That doesn't mean he is demeaning janitors, just that he doesn't want to be one. There are loads of reasons to not want to be a "code janitor" besides looking down at janitors.


I'm not a gymnast, but comparing people comparing their work to janitors and calling it gymnastics demeans gymnasts.

/s


It helps if you think gardener instead of janitor.


For any tracing GC, costs are going to be proportional to the number of pointers that need to be traced. So I would not call reducing the use of pointers to ameliorate a GC issue "obtuse, error-prone fiddling". On the contrary, it seems like one of the first approaches to look at when faced with the problem of too much GC work.

Really all languages with tracing GC are at a disadvantage when you have a huge number of long-lived objects in the heap. The situation is improved with generational GC (which Go doesn't have) but the widespread use of off-heap data structures to solve the problem even in languages like Java with generational GC suggests this alone isn't a good enough solution.

In Go's defense, I don't know another GC'ed language in which this optimization is present in the native map data structure.


Except that plenty of languages with tracing GC also have off-GC memory allocation.

Since you mention not knowing such languages, have a look at Eiffel, D, Modula-3, Active Oberon, Nim, C#/F# (specially after the latest improvements).

Also Java will eventually follow the same idea as Eiffel (where inline classes are similar to expanded classes in Eiffel), and ByteBuffers can be off-GC heap.


Maybe that is what they hit... but it seems there is a pretty healthy chance they could have resolved this by upgrading to a more modern runtime.

Go 1.9 is fairly old (1.14 is about to pop out), and there have been large improvements on tail latency for the Go GC over that period.

One of the Go 1.12 improvements in particular seems to at least symptomatically line up with what they described, at least at the level of detail covered in the blog post:

https://golang.org/doc/go1.12#runtime

“Go 1.12 significantly improves the performance of sweeping when a large fraction of the heap remains live.“


Everything I've read indicates that RAM caches work poorly in a GC environment.

The problem is that garbage collectors are optimized for applications that mostly have short-lived objects, and a small amount of long-lived objects.

Things like a large in-RAM LRU are basically the worst case for a garbage collector, because the mark-and-sweep phase always has to go through the entire cache, and because you're constantly generating garbage that needs to be cleaned up.


> The problem is that garbage collectors are optimized for applications that mostly have short-lived objects, and a small amount of long-lived objects.

I think it's not quite that.

Applications typically have a much larger old generation than young generation, i.e. many more long lived objects than short lived objects. So GCs do get optimized to process large heaps of old objects quickly and efficiently, e.g. with concurrent mark/sweep.

However as an additional optimization, there is the observation that once an application has reached steady state, most newly allocated objects die young (think: the data associated with processing a single HTTP request or user interaction in a UI).

So as an additional optimization, GCs often split their heap into a young and an old generation, where collecting the young generation earlier/more frequently reduces the overall amount of garbage collection done (and offsets the effort required to move objects around).

In the case of Go though, the programming language allows "internal pointers", i.e. pointers to members of objects. This makes it much harder (or much more costly) to implement a generational, moving garbage collector, so Go does not actually have a young/old generation split nor the additional optimization for young objects.


In this[1] video, at about the 32 minute mark, there is a discussion on GC and apps that do caching.

[1] https://www.youtube.com/watch?v=VCeHkcwfF9Q


Which is why, in GC languages that also support value types and off-GC-heap allocations, one makes use of them, instead of throwing the baby out with the bathwater.


A high number of short-lived allocations is also a bad thing in a compacting GC environment, because every allocation hands you a reference to a memory region that was touched a very long time ago and is likely a cache miss. You would like to use an object pool to avoid this, but then you run into the pitfall with long-lived objects, so there is really no good way out.


???

The allocation is going to be close to the last allocation, which was touched recently, no? The first allocation after a compaction will be far from recent allocations, but close to the compacted objects?


Being close to the last allocation doesn't matter. What matters is the memory returned to the application - and this is memory that was touched long ago and is unlikely to be in cache. If your young-generation size is larger than the L3 cache, the memory will have to be fetched from main memory every time you start on the next 64 bytes. I believe a smart CPU will notice the pattern and prefetch to reduce cache-miss latency, but a high allocation rate will still use a lot of memory bandwidth and thrash the caches.

An extreme case of that problem happens when using GC in an app that gets swapped out. Performance drops to virtually zero then.


The article also mentions the service was on go 1.9.2, which was released 10/2017. I'd be curious to see if the same issues exist on a build based on a more recent version of Go.


I was thinking that if their cache is just one large hash table, essentially an array of structs, the GC wouldn't need to scan it. What you say about strings contained in the map would explain their problems; however, I don't see the reason for it. Wouldn't you make sure every identifier uses a fixed-length GUID or similar, which would be contained in such a struct used in the array-of-structs?
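If I understand the optimization being referenced correctly, Go's collector can skip scanning the contents of a map whose key and value types contain no pointers, and a fixed-length ID type keeps it that way. A hedged sketch with made-up types:

    // GUID is a fixed-length, pointer-free key (unlike string, which carries
    // a pointer to its backing bytes).
    type GUID [16]byte

    // Entry holds only scalar fields, so the map's buckets contain no
    // pointers and the GC shouldn't need to walk the map's contents while
    // marking.
    type Entry struct {
        CreatedAt int64
        Flags     uint32
    }

    var cache = make(map[GUID]Entry, 1000000)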


OK, but in Rust those pointers can just be borrowed, obviating the need for GC at all.


Given it's a cache the entries would not have an existing natural owner… except for the cache itself.

There would be no need for a GC to traverse the entire map, but that's because rust doesn't use a GC.


While Rust does not have a discrete runtime GC process, it does utilize reference counting for dynamic memory cleanup.

So you could argue that they are still going to suffer some of the downsides of a GC'ed memory allocation. Some potential issues include non-deterministic object lifespan, and ensuring that any unsafe code they write which interacts with the cache does the "right thing" with the reference counts (potentially including de-allocation; I'm not sure what unsafe code needs to do when referencing reference counted boxes).


> While Rust does not have a discrete runtime GC process, it does utilize reference counting for dynamic memory cleanup.

That's so misleading as to essentially be a lie.

Rust uses reference counting if and only if you opt into it via reference-counted pointers. Using Rc or Arc is not the normal or default course of action, and I'm not aware of any situation where it is ubiquitous.

> So you could argue [nonsense]

No, you really could not.


On the other hand, Rust's RAII management model behaves similarly to a reference counting system where the counts are limited to 0 and 1 (well, for a loose approximation of the 0 state), right?


Some people say this, but I think it's misleading; refcounting can make things live longer, but the borrow checker cannot.


Well, with ownership, a move can make things "live" longer, I guess.


You're not wrong. I just think there's enough difference that the analogy doesn't really work.


RAII for references (pointers) is a no-op. If the cache returns references to the data in its own array there is no overhead.


I was making an assumption that using a vector of Arc<T> would be the best way to handle a global LRU cache. Perhaps I should have specified it, but it seemed pretty obvious. Sorry if it wasn’t.

If there’s a better way to handle a global LRU cache, I’m all ears.


Assuming only one thread at a time needs to access the LRU cache (not hard with the shared-nothing message-passing architecture we employ here), the lifetime of an object checked out from the cache can be understood at compile time, and we can use the borrow checker to ensure it stays that way: we've got a mutable reference to the LRU, and we can use that to get a mutable reference to an object within it. By the time the function that is mutating the data in the LRU finishes, the references to its objects must be dead (the borrow checker will enforce that). Since all this information is available at compile time, runtime ref-counting (via Rc/Arc) is not necessary.

This is made possible by Rust's memory model, where it understands ownership of data and the lifetime of each reference taken from that owned data. This means the compiler can statically determine how long an object needs to live, and that references to the object don't outlive the owned data. For use-cases where the lifetimes of references can be statically understood, an Arc/Rc is not required. This blog post goes into it in much better detail than I can: https://words.steveklabnik.com/borrow-checking-escape-analys...


Yes, I'm quite familiar with rust's borrow checking model. I've programmed some in rust, and the rest has been beaten into my head quite thoroughly by Rustaceans. I don't care for Rust, but I understand it.

Locking on one thread at a time seems like a pretty obvious performance flaw. It just doesn't seem like an appropriate design for the given workload (lots of requests, lots of stored items, largely write-only (except for its position in the queue)). It would make a lot more sense to grant multiple threads access to the LRU at any given time.

And early optimization and all that aside, creating the LRU in such a way that it can be easily restricted to one thread or opened up makes the most sense to me. Otherwise, you get to re-write the LRU (and all the code which accesses it) if it should be identified as a bottleneck.

Of course, I'm not responsible for the code or truly involved in the design process, so my perspective may be limited.


In practice, for our service, most of our CPU time is not spent in data mutation, but rather in networking and serialization (this is, btw, the same conclusion Redis came to when they added "multi-threading").

You can scale out by running multiple instances of the service (shared-nothing, N of them depending on how many cores you want to run on), or you can do message passing between cores.

In this case, we have two modes of scaling up/out: add more nodes to the cluster, or add more shared-nothing, internally partitioned LRU caches for the process to run, allowing for more concurrency.

We however only run one LRU per node, as it turns out that the expensive part is not the bottleneck here, nor will it probably ever be.


What kind of design do you have in mind? I assume you don't mean simultaneous reads/writes from multiple threads without synchronization - yolo! There are a lot of possible designs: mutex, read/write lock, concurrent hashmap. I've never worked on an LRU cache; I'm asking because I'm interested in what plays well in that use case, and how you would approach it in another language.


I think you're confusing Rust's ownership model with Swift's ARC. Rust doesn't do reference counting unless you use Rc<T> or Arc<T>.


Given the model of memory we are discussing (a global per-process LRU cache), that’s exactly what I was discussing using. Unless there’s another way to handle such global caches.


Tokio author here (mentioned in blog post). It is really great to see these success stories.

I also think it is great that Discord is using the right tool for the job. It isn't often that you need the performance gains that Rust & Tokio provide, so pick what works best to get the job done and iterate.


Basically because of:

> Rust is blazingly fast and memory-efficient: with no runtime or garbage collector, it can power performance-critical services, run on embedded devices, and easily integrate with other languages.


No offense to Tokio and Rust, I really like Rust, but someone rewriting their app because of performance limitations in their previous language choice isn’t necessarily someone picking the right tool for the job.

I’m not so sure they would have done the rewrite if the Go GC was performing better, and the choice of Rust seems primarily based on prior experience at the company writing performance sensitive code rather than delivering business value.


Correct. They wouldn't have considered Rust if the GC was performing better. They also wouldn't have even adopted Go if Elixir was sufficient. This team seems to have an incredible talent pool who is willing to push further for the sake of, as you say, delivering business value. Improving UX, investing in capabilities for growth, are valid business reasons why they're iterating over so many solutions. It's really impressive to see what they're accomplishing.


Right tool for the job should also take into account the experience of the devs you have at your disposal. For an omniscient dev, is Rust the best tool for the job? Unsure. But for them, with already significant Rust experience? Sounds like it.


Too much focus on "business value" often ends up with a codebase in a state that makes delivery of that business value pretty much impossible. Boeing was delivering a lot of business value with the MAX ...


THIS !!! this is so underrated


> Changing to a BTreeMap instead of a HashMap in the LRU cache to optimize memory usage.

Collections are one of the big areas where Go's lack of generics really hurts it. In Go, if one of the built-in collections does not meet your needs, you are going to take a safety and ergonomics hit going to a custom collection. In Rust, if one of the standard collections does not meet your needs, you (or someone else) can create a pretty much drop-in replacement that does, with similar ergonomic and safety profiles.


I'm not sure what you mean by standard collections, but BTreeMap is in Rust's standard library.


I think the point the GP is trying to make is that there’s no reason why BTreeMap couldn’t be an external crate, while only the core Go collections are allowed to be generic.

A corollary to this is that adding more generic collections to Go’s standard library implies expanding the set of magical constructs.


Rust has its lot of weird hacks too. E.g. arrays can take trait impls only if they have 32 or fewer elements... https://doc.rust-lang.org/std/array/trait.LengthAtMost32.htm...


That's… not that at all. You can absolutely implement traits for arrays of more than 32 elements[0].

It is rather that, due to a lack of genericity (namely const generics), you can't implement traits for [T; N] generically; you have to implement them for each size individually. So there has to be an upper bound somewhere[1], and the stdlib developers arbitrarily picked 32 for stdlib traits on arrays.

A not entirely dissimilar limit tends to be placed on tuples, and implementing traits / typeclasses / interfaces on them. Again the stdlib has picked an arbitrary limit, here 12[2], the same issue can be seen in e.g. Haskell (where Show is "only" instanced on tuples up to size 15).

These are not "weird hacks", they're logical consequences of memory and file size not being infinite, so if you can't express something fully generically… you have to stop at one point.

[0] here's 47 https://play.rust-lang.org/?version=stable&mode=debug&editio...

[1] even if you use macros to codegen your impl block

[2] https://doc.rust-lang.org/src/core/fmt/mod.rs.html#2115


Also worth noting that Rust's const generics support has progressed to the point that the stdlib is already using them to implement the standard traits on arrays; the 32-element issue still technically exists, but only because the stdlib is manually restricting the trait implementation so as to not accidentally expose const generics to stable Rust before const generics is officially stabilized.


That's a completely different and much more minor issue (red herring, more or less) than eschewing the one core language feature that makes performant type-safe custom data structures possible.


This is purely temporary; it used to be less hacky, but in order to move to the no-hacks world, we had to make it a bit more hacky to start.


In Go, standard collections are compiler's magic while in Rust or e.g. C++ - they are implemented as libraries.


I like to think it's a tradeoff; limit the language and standard library and you limit the amount of things you have to consider. That is, 99% of applications probably won't need a BTree.

(anecdotal: in Java I've never needed anything else than a HashMap or an ArrayList)


If you have a problem at hand which does not really benefit from the presence of a garbage collector, switching to an implementation without a garbage collector has quite a lot of potential to be at least somewhat faster. I remember running into this time trigger for garbage collection long ago - though I don't remember why, and I had mostly forgotten about it until I read this article. As also written in the article, even if there are no allocations going on, Go forces a GC every two minutes; it is set here: https://golang.org/src/runtime/proc.go#L4268

The idea for this is (if I remember correctly) to be able to return unused memory to the OS. As returning memory requires a GC to run, it is forced at fixed intervals. I am a bit surprised that they didn't contact the corresponding Go developers, as they seem to be interested in practical use cases where the GC doesn't perform well. Given that newer Go releases have improved GC performance, I am also a bit surprised that they didn't just increase this interval to an arbitrarily large number and check whether their issues went away.


Not only is there good potential for a speed improvement, but languages built around the assumption of pervasive garbage collection tend not to have good language constructs to support manual memory management.

To be fair, most languages without GCs also don't have good language constructs to support manual memory management. If you're going to make wide use of manual memory management, you should think very carefully about how the language and ecosystem you're using help or hinder your manual memory management.


This seems like a nice microservices success story. It's so easy to replace a low-performing piece of infrastructure when it is just a component with a well-defined API. Spin up the new version, mirror some requests to see how it performs, and turn off the old one. No drama, no year-long rewrites. Just a simple fix for the component that needed it the most.


You don't need microservices for that, though. One might as well have moved that piece into a library.


And then deal with cross-language FFI boundaries and cross-language builds.


This is what clicked for me on microservices years back. That the language wasn’t important and if I couldn’t do it in python or C, someone else could in Go or Java or etc.

Compared to if I wrote something in house entirely in C... lolno


Landing in a shop that uses N programming languages for N microservices would be a pretty miserable experience.


I've seen quite a few environments, and usually there's only a limited current set of tech the devs are allowed to use, and if that's not the case, I try to enforce this, but this set should evolve depending on the needs.

The main issue however is manpower. At my current client, one of the technologies still actively being used for this reason is PHP (which is a horrible fit for microservices for a lot of reasons), because they have a ton of PHP devs employed, and finding a ton of (good) people with something more fitting like Go or Rust knowledge is hard and risky and training costs a lot of money (and more importantly: time)...


I can buy this for Rust but if people have issues picking up Go quickly ...


Well, picking up the language itself is one thing (and I agree, that's quite easy with Go), but getting familiar with the ecosystem, best practices and avoiding habits from other languages? That's an entirely different thing.

And that's also how management usually sees it, and if they're smart they also realise that the first project using an unfamiliar technology is usually one to throw away.


At a previous job we used Python for all microservices, except for 'legacy' systems which were in Groovy / Rails. That was a context switch if I ever experienced one.


Once you experience protocol buffers it becomes hard to go back.


How is a serde IDL relevant to microservices? XDR would do just as well, so would JSON.


Because protocol buffers are actually pleasant to use.


RPC is pretty much also a "cross-language FFI boundary", except it can fail.

Of course it has some advantages, but it's hardly universally better.


> After digging through the Go source code, we learned that Go will force a garbage collection run every 2 minutes at minimum. In other words, if garbage collection has not run for 2 minutes, regardless of heap growth, go will still force a garbage collection.

> We figured we could tune the garbage collector to happen more often in order to prevent large spikes, so we implemented an endpoint on the service to change the garbage collector GC Percent on the fly. Unfortunately, no matter how we configured the GC percent nothing changed. How could that be? It turns out, it was because we were not allocating memory quickly enough for it to force garbage collection to happen more often.

As someone not too familiar with GC design, this seems like an absurd hack. That this hardcoded 2-minute limit is not even configurable comes across as amateurish. I have no experience with Go -- do people simply live with this and not talk about it?


Funnily enough, something similar happened at Twitch regarding their API front end written in Go: https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...


Interesting, they went a totally different route.

> The ballast in our application is a large allocation of memory that provides stability to the heap.

> As noted earlier, the GC will trigger every time the heap size doubles. The heap size is the total size of allocations on the heap. Therefore, if a ballast of 10 GiB is allocated, the next GC will only trigger when the heap size grows to 20 GiB. At that point, there will be roughly 10 GiB of ballast + 10 GiB of other allocations.
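The core of the trick, roughly as shown in the Twitch post, is just one large allocation that is kept reachable but never written to:

    package main

    import "runtime"

    func main() {
        // 10 GiB ballast: it inflates the heap-size target the GC works
        // against, so collections run far less often. Because the bytes are
        // never touched, the OS mostly never backs them with physical pages.
        ballast := make([]byte, 10<<30)

        // ... start and run the actual service here ...

        runtime.KeepAlive(ballast) // keep the ballast reachable until exit
    }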


Wow, that puts Discord's "absurd hack" into perspective! I feel like the moral here is a corollary to that law where people will depend on any observable behavior of the implementation: people will use any available means to tune important performance parameters; so you might as well expose an API directly, because doing so actually results in less dependence on your implementation details than if people resort to ceremonial magic.


I mean if you read Twitch's hack they intentionally did it in code so they didn't need to tune the GC parameter. They wanted to avoid all environment config.


I missed that part. I thought they would use a parameter if it were available, because they said this:

> For those interested, there is a proposal to add a target heap size flag to the GC which will hopefully make its way into the Go runtime soon.

What's wrong with the existing parameter?

I'm sure they aren't going this far to avoid all environment config without a good reason, but any good reason would be a flaw in some part of their stack.


Summary: Go 1.5, memory usage (heap) of 500MB on a VM with 64GiB of physical memory, with 30% of CPU cycles spent in function calls related to GC, and unacceptable problems during traffic spikes. The optimisation hack to somewhat fix the problem was to allocate 10GiB but never use the allocation, which caused a beneficial change in the GC's behaviour!


With recent Go releases, GC pauses have become negligible for most applications, so this should not get in your way. However, it can easily be tweaked if needed. There is runtime.ForceGCPeriod, which is a pointer to the forcegcperiod variable. A Go program which really needs to change this can do so, but most programs shouldn't require it.

Also, it is almost trivial to edit the Go sources (they are included in the distribution) and rebuild it, which usually takes just a minute. So Go is really suited for your own experiments - especially, as Go is implemented in Go.


runtime.ForceGCPeriod is only exported in testing, so you wouldn't be able to use it in production. But as you said, the distribution could easily be modified to fit their needs.


Thanks, didn't catch that this is for testing only.


> especially, as Go is implemented in Go.

Well, parts of it. You can't implement "make" or "new" in Go yourself, for example.


You have to distinguish between the features available to a Go program as the user writes it and the implementation of the language. The implementation is completely written in Go (plus a bit of low-level assembly). Even if the internals of e.g. the GC are not visible to a Go program, the GC itself is implemented in Go and thus easily readable and hackable for experienced Go programmers. And you can quickly rebuild the whole Go stack.


> You have to distinguish between the features available to a Go program as the user writes it and the implementation of the language.

I do, I'm just objecting to "Go is implemented in Go".


This reminds me of the ongoing saga of RUSTC_BOOTSTRAP[0][1]

The stable compiler is permitted to use unstable features in stable builds, but only for compiling the compiler. In essence, there are some Rust features that are supported by the compiler but only permitted to be used by the compiler. Unsurprisingly, various non-compiler users of Rust have decided that they want those features and begun setting the RUSTC_BOOTSTRAP envvar to build things other than the compiler, prompting consternation from the compiler team.

[0] https://github.com/rust-lang/cargo/issues/6627 [1] https://github.com/rust-lang/cargo/issues/7088


This is not entirely correct. These things that "can only be used by the compiler" are nightly features that haven't been stabilized yet. Some of them might never be stabilized, but you could always use them in a nightly compiler; stability assurances just fly out the window then. This is also why using that environment variable is highly discouraged: it breaks the stability guarantees of the language and you're effectively using a pinned nightly. This is reasonable only in a very small handful of cases.


Yep. Beyond that, there is at least one place[0] where the standard library uses undefined behavior "based on its privileged knowledge of rustc internals".

[0]: https://doc.rust-lang.org/src/std/io/mod.rs.html#379


I don't see what is incorrect? Perhaps I was insufficiently clear that when I said "the compiler" I meant "the stable compiler" as opposed to more generally all possible versions of rustc. The stable compiler is permitted to use unstable features for its own bootstrap, example being the limited use of const generics to compile parts of the standard library.


But on what basis? What part of Go isn't implemented in Go?


I gave an example a bit further up, here [1].

[1] https://news.ycombinator.com/item?id=22240223


But this doesn't contradict the statement that Go is implemented in Go. If you look at the sources of the Go implementation, the source code is 99% Go, with a few assembly functions (most for optimizations not performed by the compiler) and no other programming language used.


That's not correct. The implementation of "make", for example, looks like Go but isn't - it relies on internal details of the gc compiler that isn't part of the spec [1]. That's why a Go user can't implement "make" in Go.

[1] https://golang.org/ref/spec


In which language do you think is "make" implemented?


If I may interject: I believe you are both trying to make orthogonal points. calcifer is trying to say that some features of Go are compiler intrinsics, and cannot be implemented as a library. You are making a different point, which is that those intrinsics are implemented in Go, the host language. Both statements can be true at the same time, but I agree that the terms were not used entirely accurately, causing confusion.


And yet, maps and slices are implemented in Go.

https://golang.org/src/runtime/map.go

https://golang.org/src/runtime/slice.go

I don't see why you couldn't do something similar in your own Go code. It just won't be as convenient to use as the compiler wouldn't fill in the type information (element size, suitable hash function, etc.) for you. You'd have to pass that yourself or provide type-specific wrappers invoking the unsafe base implementation. More or less like you would do in C, with some extra care to abide by the rules required for unsafe Go code.


Nothing you wrote contradicts what I said. You can't implement "make" in Go. The fact that you can implement some approximation of it with a worse signature and worse runtime behaviour (since it won't be compiler assisted) doesn't make it "make".


You may still have significant CPU overhead from the GC; e.g. the Twitch article (mentioned elsewhere in the comments) measured 30% of CPU used for GC in one program (Go 1.5, I think).

Obviously they consider spending 50% more on hardware is a worthwhile compromise for the gains they get (e.g. reduction of developer hours and reduced risk of security flaws or avoiding other effects of invalid pointers).


In this case, as they were running into the automatic GC interval, their program did not create much, if any garbage. So the CPU overhead for the GC would have been quite small.

If you do a lot of allocations, the GC overhead rises of course, but so would the effort of doing allocations/deallocations with a manual management scheme. In the end it is a bit of a trade-off as to what fits the problem at hand best. The nice thing about Rust is that "manual" memory management doesn't come at the price of program correctness.


Languages that have GC frequently rely on heap allocation by default and make plenty of allocations. Languages with good manual memory management frequently rely on stack allocation and give plenty of tools to work with data on the stack. Automatic allocation on the stack is almost always faster than the best GC.


GC languages often do, and also often do not. Most modern GC languages have escape analysis, so if the compiler can deduce that an object does not escape the current scope, it is stack allocated instead of heap allocated. Modern JVMs do this, and Go does this also. Furthermore, Go is way more allocation friendly than e.g. Java. In Go, an array of structs is a single item on the heap (or stack). In Java, you would have an array of pointers to objects separately allocated on the heap (Java is just now trying to rectify this with the "record" types). Also, structs are passed by value instead of by reference.

As a consequence, the heap pressure of a Go program is not necessarily significantly larger than that of an equivalent C or Rust program.
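To make the array-of-structs point concrete (my own illustration):

    type Point struct{ X, Y float64 }

    // One contiguous, pointer-free allocation holding a million inline
    // structs; the GC sees a single object. The closest Java equivalent,
    // an array of a class type, is a million separately allocated objects
    // plus an array of references to them, all of which must be traced.
    var points = make([]Point, 1000000)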


Escape analysis is very limited, and what I've found in practice is that it often doesn't work in real code, where not all the things are inlined. If a method allocates an object and returns it 10 layers up, EA can't do anything.

By contrast, in e.g. C I can wrap two 32-bit fields in a struct and freely pass them anywhere with zero heap allocations.

Also, record types are not going to fix the pointer-chasing problem with arrays. This is promised by Valhalla, but I've been hearing about it for 3 years or more now.


It does sound like Discord's case was fairly extraordinary in terms of the degree of the spike:

> We kept digging and learned the spikes were huge not because of a massive amount of ready-to-free memory, but because the garbage collector needed to scan the entire LRU cache in order to determine if the memory was truly free from references.

So maybe this is one of those things that just doesn't come up in most cases? Maybe most services also generate enough garbage that that 2-minute maximum doesn't really come into play?


Games written in the Unity engine are (predominately) written in C#, a garbage collected language. Keeping large amounts of data around isn't that unusual since reading from disk is often prohibitively slow, and it's normal to minimize memory allocation/garbage generation (using object pools, caches etc), and manually trigger the GC in loading screens and in other opportune places (as easy as calling System.GC.Collect()). At 60 fps each frame is about 16ms. You do a lot in those 16ms, adding a 4ms garbage collection easily leads to dropping a frame. Of course whether that matters depends on the game, but Unity and C# seem to handle it well for the games that need tiny or no GC pauses.

But (virtually) nobody is writing games in Go, so it's entirely possible that it's an unusual case in the Go ecosystem. Being an unsupported usecase is a great reason to switch language.


If there's an example of getting great game performance with a GC language, Unity isn't it. Lots of Unity games get stuttery, and even when they don't, they seem to use a lot of RAM relative to game complexity. Kerbal Space Program even mentioned in their release notes at one point something about a new garbage collector helping with frame rate stuttering.

I started up KSP just now, and it was at 5.57GB before I even got to the main menu. To be fair, I hadn't launched it recently, so it was installing its updates or whatever. Ok, I launched it again, and at the main menu it's sitting on 5.46GB. (This is on a Mac.) At Mission Control, I'm not even playing the game yet, and the process is using 6.3GB.

I think a better takeaway is that you can get away with GC even in games now, because it sucks and is inefficient but it's ... good enough. We're all conditioned to put up with inefficient software everywhere, so it doesn't even hurt that much anymore when it totally sucks.


Right; Go is purpose-built for writing web services, and web services tend to be pretty tolerant of (small) latency spikes because virtually anyone who's calling one is already expecting at least some latency


> Go is purpose-built for writing web services

Is this true? Go was built specifically for C++ developers, which, even when Go was first release, was a pretty unpopular language for writing web services (though maybe not at Google?). That a non-trivial number of Ruby/Python/Node developers switched was unexpected. (1)

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


your quote is just corroborating what the reply is saying. go was written for web services which were written in c++ at google.


The linked article doesn't say anything about web services. Just C++. I believe Rob Pike was working on GFS and log management, and Go was always initially pitched at system programming (which is not web services).

> Our target community was ourselves, of course, and the broader systems-programming community inside Google. (1)

(1) http://www.informit.com/articles/article.aspx?p=1623555


C# uses a generational GC IIRC, so it may be better suited for a system where you have a relatively stable collection that does not need to be fully garbage collected all the time, and a smaller and more volatile set of objects that will be GC'ed more often. I don't think the current garbage collector in Go does anything similar to that.


Yeah, that's the ideal pattern in C#. You have to be smart-ish about it, but writing low-GC-pressure code can be easier than you think. Keep your call stacks shallow, avoid certain language constructs (e.g. LINQ) or at least know when they really make sense for the cost (async).

IDK if this is true for earlier versions, but as of today C# has pretty clear rules: 16MB in desktop or 64MB in server (which type is used can be set via config) will trigger a full GC [1]. Note that less than that may trigger a lower level GC, but those are usually not the ones that are noticed. I'm guessing at least some of that is because of memory locality as well as the small sizes.

On the other hand, in a lot of the Unity related C# posts I see on forums/etc, passing structs around is considered the 'performant' way to do things to minimize GC pressure.

[1] https://docs.microsoft.com/en-us/dotnet/standard/garbage-col... [2] https://blog.golang.org/ismmkeynote


This might have changed with more recent updates, but I was under the impression that the Mono garbage collector in Unity was a bit dated and not as up-to-date as a C# one today.


Unity has recently added the "incremental GC" [1] which spreads the work of the GC over multiple frames. As I understand it this has a lower overall throughput, but _much_ better worst case latency.

[1] https://blogs.unity3d.com/2018/11/26/feature-preview-increme...


Heap caches that keep things longer than a GC cycle are terrible under GC unless you have a collector in the new style like ZGC, Azul or Shenandoah.


Systems with poor GC and the need to keep data for lifetimes greater than a request should have an easy to use off heap mechanism to prevent these problems.

Often something like Redis is used as a shared cache that is invisible to the garbage collector, there is a natural key with a weak reference (by name) into a KV store. One could embed a KV store into an application that the GC can't scan into.


100%. In Java, you would often use OpenHFT's ChronicleMap for now and hopefully inline classes/records in Java 16 or so.


Ehcache has an efficient off-heap store: https://github.com/Terracotta-OSS/offheap-store/

Doesn't Go have something like this available? It's an obvious thing for garbage-collected languages to have.


You can usually resort to `import C` and using `C.malloc` to get an unsafe.Pointer.
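A minimal sketch of that cgo route (the memory is invisible to the Go GC, so freeing it is entirely on you):

    package offheap

    /*
    #include <stdlib.h>
    */
    import "C"

    import "unsafe"

    // Alloc returns n bytes of C-heap memory as a []byte. The Go GC neither
    // scans nor frees it, so pair every Alloc with a Free.
    func Alloc(n int) []byte {
        p := C.malloc(C.size_t(n))
        return unsafe.Slice((*byte)(p), n)
    }

    func Free(b []byte) {
        C.free(unsafe.Pointer(&b[0]))
    }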


What feature of "the new style" makes them more suitable in this case?


They have very short pause times even for very large heaps with lots of objects in them as they don't have to crawl the entire live tree when collecting.


A GC scan of a large LRU (or any large object graph) is expensive in CPU terms because many of the pointers traversed will not be in any CPU cache. Memory access latency is extremely high relative to how fast CPUs can process cached data.

You could maybe hack around the GC performance without destroying the aims of LRU eviction by batching additions to your LRU data structure to reduce the number of pointers by a factor of N. It's also possible that a Go BTree indexed by timestamp, with embedded data, would provide acceptable LRU performance and would be much friendlier on the cache. But it might also not have acceptable performance. And Go's lack of generic datastructures makes this trickier to implement vs Rust's BtreeMap provided out of the box.
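A rough sketch of the batching idea (entirely my own illustration, not Discord's design): group entries into pointer-free chunks so the collector chases one pointer per N entries, at the cost of coarser-grained eviction.

    const chunkSize = 64

    // entry is fixed-size and pointer-free, so the GC never looks inside it.
    type entry struct {
        key [16]byte
        val [48]byte
    }

    type chunk struct {
        entries  [chunkSize]entry // inline payload, no per-entry pointers
        lastUsed int64            // coarse recency shared by the whole chunk
    }

    // batchedLRU gives the GC one pointer per 64 entries instead of one (or
    // more) per entry; eviction happens a chunk at a time.
    type batchedLRU struct {
        chunks []*chunk
    }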


Yes, this is a maximally pessimal case for most forms of garbage collection. They don't say, but I would imagine these are very RAM-heavy systems. You can get up to 768GB right now on EC2. Fill that entire thing up with little tiny objects the size of usernames or IDs for users, or even merely 128GB systems or something, and the phase where you crawl the RAM to check references by necessity is going to be slow.

This is something important to know before choosing a GC-based language for a task like this. I don't think "generating more garbage" would help, the problem is the scan is slow.

If Discord was forced to do this in pure Go, there is a solution, which is basically to allocate a []byte or a set of []bytes and then treat it as an expanse of memory yourself, managing hashing, etc. - basically doing manual arena allocation yourself. You'll see this technique used in GC'd languages, including Java.
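A bare-bones sketch of what that looks like (hypothetical layout, nothing like Discord's actual data):

    const recordSize = 64

    // arena packs fixed-size records into one big []byte. The GC sees a
    // single pointer-free allocation instead of millions of small objects;
    // hashing, encoding and eviction are all on you.
    type arena struct {
        buf  []byte
        next int
    }

    func newArena(records int) *arena {
        return &arena{buf: make([]byte, records*recordSize)}
    }

    // alloc hands out the offset of the next free record; callers encode and
    // decode their fields into buf[off : off+recordSize] by hand.
    func (a *arena) alloc() (off int, ok bool) {
        if a.next+recordSize > len(a.buf) {
            return 0, false
        }
        off = a.next
        a.next += recordSize
        return off, true
    }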

But it's tricky code. At that point you've shucked off all the conveniences and features of modern languages and in terms of memory safety within the context of the byte expanses, you're writing in assembler. (You can't escape those arrays, which is still nice, but hardly the only possible issue.)

Which is, of course, where Rust comes in. The tricky code you'd be writing in Go/Java/other GC'd language with tons of tricky bugs, you end up writing with compiler support and built-in static checking in Rust.

I would imagine the Discord team evaluated the option of just grabbing some byte arrays and going to town, but it's fairly scary code to write. There are just too many ways for such code to end up having a 0.00001% bug that will result in something like the entire data structure getting intermittently trashed every six days on average, virtually impossible to pick up in testing and possibly even escaping canary deploys.

Probably some other languages have libraries that could support this use case. I know Go doesn't ship with one and at first guess, I wouldn't expect to find one for Go, or one I would expect to stand up at this scale. Besides, honestly, at feature-set maturity limit for such a library, you just end up with "a non-GC'd inner platform" for your GC'd language, and may well be better off getting a real non-GC'd platform that isn't an inner platform [1]. I've learned to really hate inner platforms.

By contrast... I'd bet this is fairly "boring" Rust code, and way, way less scary to deploy.

[1]: https://en.wikipedia.org/wiki/Inner-platform_effect


> I don't think "generating more garbage" would help

To be clear: I wasn't suggesting that generating garbage would help anyone. Only that in a more typical case, where more garbage is being generated, the two minute interval itself might never surface as the problem because other things are getting in front of it.


It comes from a desire to run in the exact opposite direction as the JVM, which has options for every conceivable parameter. Go has gone through a lot of effort to keep the number of configurable GC parameters to 1.


Anyone who pushes the limits of a machine needs tuning options. If you can't turn knobs you have to keep rewriting code until you happen to get the same effect.


There's definitely a happy medium. One setting may indeed be too few, but JVM's many options ends in mass cargo-cult copypasta, often leading to really bad configurations.


Haven’t really seen anyone trying to use JVM options to get performance benefits without benchmarks for their specific use case the last 10 years or so.


Tuning options don't work well with diverse libraries, though. If you use 2 libraries and they both are designed to run with radically different tuning options what do you do? Some bad compromise? Make one the winner and one the loser? The best you can do is do an extensive monitoring & tuning experiment, but that's quite involved as well and still won't get you the maximum performance of each library, either.

At least with code hacking around the GC's behavior that code ends up being portable across the ecosystem.

There doesn't seem to really be a good option here either way. This amount of tuning-by-brute-force (either by knobs or by code re-writes) seems to just be the cost of using a GC.


That might be true, but from a language design PoV it isn't convincing to have dozens of GC-related runtime flags a la Java/JVM. If you need those anyway, this might point to pretty fundamental language expressivity issues.


[flagged]


This was the first time I've seen that annoying cAsE meme on HN and I pray it's the last. It is a lazy way to make your point, hoping your meme-case does all the work for you so that you don't have to say anything substantial.

Or do you think it adds to the discussion?


It indicates a mocking, over-the-top tone to indicate the high level of contempt I have for my originally-stated paraphrase (and the people who have caused software dev decisionmaking to be that way). So yes, I think it does add to the discussion.


It's annoying to read, so while it does get across the mocking tone, the reaction of annoyance at the author is far stronger.


Surely tuning some GC parameters is less effort that having to do a rewrite in another language.


If you want to force it you can call "runtime.GC()" but that's almost always a step in the wrong direction.

It is worth it to read and understand: https://blog.twitch.tv/en/2019/04/10/go-memory-ballast-how-i...


I think for most applications (especially the common use-case of migrating a scripting web monolith to a go service), people just aren't hitting performance issues with GC. Discord being a notable exception.

If these issues were more common, there would be more configuration available.

[EDIT] to downvoters: I'm not saying it's not an issue worth addressing (and it may have already been since they were on 1.9), I was just answering the question of "why this might happen"


Or, in the case of latency, just wait a few months because the Go team obsesses about latency (no surprise from a Google supported language). Discord's comparison is using Go1.9. Their problem may well have been addressed in Go1.12. See https://golang.org/doc/go1.12#runtime.


You are able to disable GC with:

  GOGC=off
As someone mentions below.

More details here: https://golang.org/pkg/runtime/


Keeping GC off for a long running service might become problematic. Also, the steady state might have few allocations, but startup may produce a lot of garbage that you might want to evict. I've never done this, but you can also turn GC off at runtime with SetGCPercent(-1).

I think with that, you could turn off GC after startup, then turn it back on at desired intervals (e.g. once an hour or after X cache misses).
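Something like this, as a hedged sketch (debug.SetGCPercent and runtime.GC are the real stdlib calls; the one-hour interval and the warmUp/serve placeholders are made up):

    package main

    import (
        "runtime"
        "runtime/debug"
        "time"
    )

    func main() {
        warmUp() // placeholder: startup allocations run under the normal GC

        debug.SetGCPercent(-1) // disable automatic collections

        go func() {
            for range time.Tick(time.Hour) {
                runtime.GC() // explicit, scheduled collection
            }
        }()

        serve() // placeholder: steady-state request handling
    }

    func warmUp() {}
    func serve() { select {} }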

It's definitely risky though. E.g. if there is a hiccup with the database backend, the client library might suddenly produce more garbage than normal, and all instances might OOM near the same time. When they all restart with cold caches, they might hammer the database again and cause the issue to repeat.


> ...all instances might OOM near the same time.

CloudFront, for this reason, allocates heterogeneous fleets in its PoPs which have diff RAM sizes and CPUs [0], and even different software versions [1].

> When they all restart with cold caches, they might hammer the database again and cause the issue to repeat.

Reminds me of the DynamoDB outage of 2015 that essentially took out us-east-1 [2]. Also, ELB had a similar outage due to unending backlog of work [3].

Someone must write a book on design patterns for distributed system outages or something?

[0] https://youtube.com/watch?v=pq6_Bd24Jsw&t=50m40s

[1] https://youtube.com/watch?v=n8qQGLJeUYA&t=39m0s

[2] https://aws.amazon.com/message/5467D2/

[3] https://aws.amazon.com/message/67457/


Google's SRE book covers some of this (if you aren't cheekily referring to that). E.g. chapters 21 and 22 are "Handling Overload" and "Addressing Cascading Failures". The SRE book also covers mitigation by operators (e.g. manually setting traffic to 0 at load balancer and ramping back up, manually increasing capacity), but it also talks about engineering the service in the first place.

This is definitely a familiar problem if you rely on caches for throughput (I think caches are most often introduced for latency, but eventually the service is rescaled to traffic and unintentionally needs the cache for throughput). You can e.g. pre-warm caches before accepting requests or load-shed. Load-shedding is really good and more general than pre-warming, so it's probably a great idea to deploy throughout the service anyway. You can also load-shed on the client, so servers don't even have to accept, shed, then close a bunch of connections.

The more general pattern to load-shedding is to make sure you handle a subset of the requests well instead of degrading all requests equally. E.g. processing incoming requests FIFO means that as queue sizes grow, all requests become slower. Using LIFO will allow some requests to be just as fast and the rest will timeout.
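A minimal load-shedding sketch in Go (my own illustration, not from the SRE book): cap in-flight work and reject the overflow immediately, rather than letting every request queue and slow down.

    package main

    import "net/http"

    // slots bounds concurrent requests; 512 is an arbitrary capacity.
    var slots = make(chan struct{}, 512)

    func handler(w http.ResponseWriter, r *http.Request) {
        select {
        case slots <- struct{}{}:
            defer func() { <-slots }()
            // ... do the real work ...
            w.WriteHeader(http.StatusOK)
        default:
            // Shed load: fail fast so the requests we do accept stay fast.
            http.Error(w, "overloaded", http.StatusServiceUnavailable)
        }
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }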


Your comment reminds me of this excellent ACM article by Facebook on the topic: https://queue.acm.org/detail.cfm?id=2839461

I've read the first SRE book but having worked on large-scale systems it is impossible to relate to the book or internalise the advice/process outlined in it unless you've been burned by scale.

I must note that there are two Google SRE books in-circulation, now: https://landing.google.com/sre/books/


How does Go allow you to manage memory manually? Malloc/free or something more sophisticated?


It doesn't. If you disable the GC… you only have an allocator, the only "free" is to run the entire GC by hand (calling runtime.GC())


So other comments didn't mention this, per se, but Go gives you tools to see what memory escapes the stack and ends up being heap allocated. If you work to ensure things stay stack allocated, it gets freed when the stack frees, and the GC never touches it.

But, per other comments, there isn't any direct malloc/free behavior. It just provides tools to help you enable the compiler to determine that GC is not needed for some allocations.
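Concretely, building with `go build -gcflags='-m'` prints the compiler's escape-analysis decisions for code like this (my own example):

    package escapes

    type point struct{ x, y int }

    // p is passed by value and never outlives the call, so it stays on the
    // stack and the GC never sees it.
    func sum(p point) int { return p.x + p.y }

    // The returned pointer outlives the call, so -m reports the allocation
    // as escaping to the heap.
    func newPoint(x, y int) *point { return &point{x, y} }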


It doesn't. You could start and stop the GC occasionally, maybe?


Typically a GC runtime will do a collection when you allocate memory, probably when the heap size is 2x the size after the last collection. But this doesn't free memory when the process doesn't allocate memory. The goal is to return unused memory back to the operating system so it's available for other purposes. (You allocate a ton of memory, calculate some result, write the result to a file, and drop references to the memory. When will it be freed?)
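For Go specifically, one hedged answer to "when will it be freed?" is: whenever you ask. runtime/debug.FreeOSMemory forces a collection and returns as much freed memory as possible to the OS, instead of waiting for the periodic forced GC and background scavenger:

    package main

    import "runtime/debug"

    // computeBigThing and writeOut are placeholders for the real work.
    func computeBigThing() []byte { return make([]byte, 1<<30) }
    func writeOut([]byte)         {}

    func main() {
        result := computeBigThing()
        writeOut(result)
        result = nil // drop the last reference to the big allocation
        _ = result

        // Collect now and hand freed pages back to the operating system.
        debug.FreeOSMemory()
    }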


Seems like Go is more suitable for the “spin up, spin down, never let the GC run” kind of scenario that is being pushed by products like AWS Lambda and other function as a service frameworks.


Why do you think that is? Go has a really great GC which mostly runs concurrently with your program, with GC stops only in the domain of less than a millisecond. Discord ran into a corner case where they did not create enough garbage to trigger GC cycles, but had a performance impact due to scheduled GC cycles for returning memory to the OS (which they wouldn't have needed to do either).


Because many services eventually become performance bottlenecked either via accumulation of users or accumulation of features. In either case eventually performance becomes very critical.


Sure, but that doesn't make Go unsuitable for those tasks on a fundamental basis. Go is very high performance. Whether Go or another language is the best match very much depends on the problem at hand and the specific requirements. Even in the described case they might have tweaked the GC to fit their bill.


GC pauses aside can Go match the performance of Rust when coded properly? Would sorting an array of structs in Go be in the same ballpark as sorting the same sized array of structures in Rust? I don't know a whole lot about how Go manages the heap under the covers.


Sure. That kind of code is very similar in both languages.

Rust will probably be faster because it benefits from optimizations in LLVM that Go likely doesn't have.

Go arrays of structs are contiguous, not indirect pointers.


Go always feels like an amateur language to me; I’ve given up on it. This feels right in line - similar to the hardcoded GitHub magic.


I could be wrong, but I don't believe there is "hardcoded GitHub magic".

IIRC I have used GitLab and Bitbucket and self-hosted Gitea instances the same exact way, and I'm fairly sure there was an hg repo in one of those. Don't recall doing anything out of the ordinary compared to how I would use a github URL.


There are a couple of hosting services hardcoded in Go. I believe it was about splitting the URL into the actual URL and the branch name.


https://github.com/golang/go/blob/e6ebbe0d20fe877b111cf4ccf8...

Ouch, Go never ceases to amaze. The Bitbucket case[0] is even more crazy, calling out to the Bitbucket API to figure out which VCS to use. It has a special case for private repositories, but seems to hard-code cloning over HTTPS.

If only we had some kind of universal way to identify resources, that told you how to access it...

[0]: https://github.com/golang/go/blob/e6ebbe0d20fe877b111cf4ccf8...


Thanks for the reference to prove me wrong.

Wow, that's sad. I'm glad it works seamlessly, don't get me wrong, but I was assuming I could chalk it up to defacto standards between the various vendors here.


This is in line with Go's philosophy, they try to keep the language as simple as possible.

Sometimes it means an easy thing in most other languages is difficult or tiresome to do in Go. Sometimes it means hard-coded values/decisions you can't change (only tabs anyone?).

But overall this makes for a language that's very easy to learn, where code from project to project and team to team is very similar and quick to understand.

Like anything, it all depends on your needs. We've found it suits ours quite well, and migrating from a Ruby code base has been a breath of fresh air for the team. But we don't have the same performance requirements as Discord.


"Simple" when used in programming, doesn't mean anything. So let's be clear here: what we mean is that compilation occurs in a single pass and the artifact of compilation is a single binary.

These are two things that make a lot of sense at Google if you read why they were done.

But unless you're working at Google, I struggle to guess why you would care about either of these things. The first requires sacrificing anything resembling a reasonable type system, and even with that sacrifice Go doesn't really deliver: are we really supposed to buy that "go generate" isn't a compilation step? The second is sort of nice, but not nice enough to be a factor in choosing a language.

The core language is currently small, but every language grows with time: even C with its slow-moving, change-averse standards body has grown over the years. Currently people are refreshed by the lack of horrible dependency trees in Go, but that's mostly because there aren't many libraries available for Go: that will also change with time (and you can just not import all of CPAN/PyPI/npm/etc. in any language, so Go isn't special anyway).

If you like Go for some aesthetic of "simplicity", then sure, I guess I can see how it has that. But if we're discussing pros and cons, aesthetics are pretty subjective and not really worth talking about.


I don't agree with your definition of simplicity.

I like Go and I consider it a simple language because:

1. I can keep most of the language in my head and I don't hit productivity pauses where I have to look something up.

2. There is usually only one way to do things and I don't have to spend time deciding on the right way.

For me, these qualities make programming very enjoyable.


> I don't agree with your definition of simplicity.

You mean where I explicitly said that "simple" didn't mean anything, so we should talk about what we mean more concretely?

> 1. I can keep most of the language in my head and I don't hit productivity pauses where I have to look something up.

The core language is currently small, but every language grows with time: even C with its slow-moving, change-averse standards body has grown over the years.

> 2. There is usually only one way to do things and I don't have to spend time deciding on the right way.

Go supports functional programming and object-oriented programming, so pretty much anything you want to do has at least two ways to do it--it sounds like you just aren't familiar with the various ways.

The problem with having more than one way to do things isn't usually choosing which to use, by the way: the problem is when people use one of the many ways differently within the same codebase and it doesn't play nicely with the way things are done in the codebase.

This isn't really a criticism of Go, however: I can't think of a language that actually delivers on there being one right way to do things (most don't even make that promise--Python makes the promise but certainly doesn't deliver on it).


Does Go support functional programming? There's no support for map, filter, etc. It barely supports OOP too, with no real inheritance or generics.

I've been happy working with it for a year now, though I've had the chance to work with Kotlin and I have to say, it's very nice too, even if the parallelism isn't quite easy/ convenient to use.


It supports first-class functions, and it supports classes/objects. Sure, it doesn't include good tooling for either, but:

1. map/filter are 2 lines of code each.

2. Inheritance is part of mainstream OOP, but there are some less common languages that don't support inheritance in the way you're probably thinking (i.e. older versions of JavaScript before they caved and introduced two forms of inheritance).

3. Generics are more of a strong type thing than an OOP thing.
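For what it's worth, here's what those hand-rolled helpers look like in pre-generics Go (type-specific, so you repeat them per element type; the names are mine):

    func mapInts(xs []int, f func(int) int) []int {
        out := make([]int, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    func filterInts(xs []int, keep func(int) bool) []int {
        var out []int
        for _, x := range xs {
            if keep(x) {
                out = append(out, x)
            }
        }
        return out
    }

(A handful of lines rather than two, but the point stands that they're short.)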


Offtopic but what are you missing when you have to use tabs instead of spaces? I can understand different indentation preferences but I can change the indentation width per tab in my editor. And then everyone can read the code with the indentation they prefer, while the file stays the same.


> everyone can read the code with the indentation they prefer, while the file stays the same.

Have you ever worked in a code base with many contributors that changed over the course of years? In my experience it always ends up a jumble where indentation is screwed up and no particular tab setting makes things right. I've worked on files where different lines in the same file might assume tab spacing of 2, 3, 4, or 8.

For example, say there is a function with a lot of parameters, so the argument list gets split across lines. The first line has, say, two tabs before the start of the function call. The continuation line ideally should be two tabs then a bunch of spaces to make the arguments line up with the arguments from the first line. But in practice people end up putting three or four tabs to make the 2nd line line up with the arguments of the first line. It looks great with whatever tab setting the person used at that moment, but then change tab spacing and it no longer is aligned.


On the good side, the problem of mixing tabs and spaces does not normally appear in Go sources, as gofmt always converts spaces to tabs, so there is no inconsistent indentation. Normally I prefer spaces to tabs because I dislike the mixing, but gofmt solves this nicely for me.


Please explain to me how this works for the case I outlined, e.g.:

        some_function(arg1, arg2, arg3, arg4,
                      arg5, arg6);
For the sake of argument, say tabstop=4. If the first line starts with two tabs, will the second line also have two tabs and then a bunch of spaces, or will it start with five tabs and a couple spaces?


You wouldn't use an alignment-based style, but a block-based one instead:

  some_function(
      arg1,
      arg2,
      arg3,
      arg4,
      arg5,
      arg6,
  );
(I don't know what Go idiom says here, this is just a more general solution.)


Checking the original code on the playground, Go just reindents everything using one tab per level. So if the funcall is indented by 2 (tabs), the line-broken arguments are indented by 3 (not aligned with the open paren).

rustfmt looks to try to be "smarter", as it will move the args list and add line breaks to it so as not to go beyond whatever limit is configured; on the playground, gofmt apparently doesn't insert breaks in arg lists.


You should NOT do such alignment anyway, because if you rename "some_function" to "another_function", then you will lose your formatting.

Instead, format arguments in a separate block:

    some_function(
        arg1, arg2, arg3, arg4,
        arg5, arg6);
When arguments are aligned in a separate block, both spaces and tabs work fine.

My own preference is tabs, because of less visual noise in code diff [review].


In an ideal world, I'd think you would put a "tab stop" character before arg1, then a single tab on the following line, with the bonus benefit that the formatting would survive automatic name changes and not create an indent-change-only line in the diff. Trouble being that all IDEs would have to understand that character, and compilers would have to ignore it (hey, ASCII has form feed and vertical tab that could be repurposed...).


Or you could use regular tab stop characters to align parts of adjacent lines. That's the idea behind elastic tabstops: http://nickgravgaard.com/elastic-tabstops/

Not all editors, however, support this style of alignment, even if they support plugins (looks at vim and its forks).


> In my experience it always ends up a jumble where indentation is screwed up and no particular tab setting makes things right.

Consider linting tools in your build.
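
For example, one way to wire that in (just a sketch, and only one of many options) is a small test that fails whenever gofmt -l reports files whose formatting differs from gofmt's output:

    package main

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // TestGofmt fails if any Go file under the current directory is not
    // gofmt-formatted. Many projects run the equivalent gofmt -l check
    // directly in CI instead of inside go test.
    func TestGofmt(t *testing.T) {
        out, err := exec.Command("gofmt", "-l", ".").Output()
        if err != nil {
            t.Fatalf("running gofmt: %v", err)
        }
        if files := strings.TrimSpace(string(out)); files != "" {
            t.Errorf("files not gofmt-formatted:\n%s", files)
        }
    }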


It's just an example of something the Go team made a decision on and won't allow you to change. I mean, even Python lets you choose. I don't really have a problem with it, however, even if I do prefer spaces.


There's a difference between making decisions on things that are really just bikeshedding, and making sweeping decisions in contexts that legitimately need per-app tuning, like immature GCs.

The Azul guys get to claim that you don't need to tune their GC; the Go team doesn't.


Hmm... this is why Azul's install and configuration guide runs to hundreds of pages. All the advanced tuning, the profiling, the OS configuration commands, the contingency memory pools, are perhaps for GCs which Azul does not sell.


I mean, they'll let you, because the kind of customers who want to be able to tune are the kinds of customers that Azul targets. But everything I've heard from their engineers is that they've solved a lot of customer problems by resetting things to defaults and just letting the GC have a giant heap to play with.

Not sure how that makes the golang position any better.


Python chose spaces a la PEP8 by the way.


PEP8 isn't a language requirement, but a style guide. There are tools to enforce style on Python, but the language itself does not.


Same thing with Go... tabs aren't enforced, but the out-of-the-box formatter will use tabs. PyCharm will default to trying to follow PEP 8, and GoLand will do the same: it will try to follow the gofmt standards.

See:

https://stackoverflow.com/questions/19094704/indentation-in-...


You can use tabs to indent your Python code. OK, you might be lynched if you share your code, but as long as you don't mix tabs and spaces, it is fine.


Same with Go, you can use spaces.



I don't know about anyone else, but I like aligning certain things at half-indents: labels and case statements pulled half an indent back, so you can skim the silhouette of both the surrounding block and the jump targets within it; braceless if/for bodies, to emphasize their single-statement nature (that convention alone would have made "goto fail" blatantly obvious to human readers, though it wouldn't have helped the compiler); and virtual blocks created by API structure (between glBegin() and glEnd() in the OpenGL 1.x days).
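
For anyone who hasn't seen the style, a rough illustration of the half-indented case labels (made-up names, and valid Go only because Go ignores indentation; gofmt would of course flatten it straight back):

    // Indent = 4 spaces here, so a "half indent" = 2.
    func step(state string) {
        switch state {
          case "idle": // labels sit between the switch and the statements under them
            start()
          case "running":
            stop()
        }
    }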

Thing is, few if any IDEs support the concept, so if I want to have half-indents, I must use spaces. Unfortunately, these days that means giving up and using a more common indent style most of the time, as the extra bit of readability generally isn't worth losing automatic formatting or multi-line indent changes.


So you are the person who ruins it for everyone (are you an emacs user, by any chance?). Tabs are more versatile; you can even use proportional fonts with them. Yet projects end up giving up on tabs because many people end up mixing them with spaces (unknowingly, or in your case knowingly, using a configuration that is unavailable in many IDEs).

BTW, when you mix spaces with tabs you eliminate all the benefits that tabs give (for example, you can no longer dynamically change the tab size without ruining the formatting).


If I were an emacs user, I'd figure out how to write a plugin to display tab-indented code to my preferences.

No, I used to be a Notepad user (on personal projects, not shared work; you can kind of see it in the way I use indentation to convey information that IDEs would emphasize with font or colour), and these days I use tabs but longingly wish Eclipse, etc. had more options in their already-massive formatting configuration dialogues.


The reason I asked is that I believe this behavior is what Emacs does by default (actually, I don't know if it's the default, but I've seen it in code produced by Emacs users), e.g.:

<tab> (inserts 4 spaces) <tab> (replaces the 4 spaces with a tab spanning 8 columns) <tab> (adds 4 spaces after the tab) <tab> (replaces that with two tabs, and so on)

Unless I misunderstood what formatting you were using.


You can use empty scope braces for this task in most languages. It's not a "half-indent" but it gives you the alignment and informs responsible variable usage.
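
A minimal sketch of that in Go (the helper names are made up; a bare block works the same way in most C-family languages):

    func process() {
        // A bare block: scope braces not attached to any if/for.
        // It gives the visual grouping of an extra indent level and
        // limits tmp's lifetime to these few lines.
        {
            tmp := loadConfig() // hypothetical helper
            apply(tmp)          // hypothetical helper
        }
        // tmp is out of scope here; the compiler rejects any later use of it.
    }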

