Java is also making good progress on low latency GC.
Reference counting can be slower than GC if you are using thread-safe refcounts, which have to be updated atomically.
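To make the atomic-update cost concrete, here's a minimal Rust sketch (the `clone_across_threads` helper is mine, just for illustration): `Arc` bumps its refcount with atomic instructions so clones can cross threads, while `Rc` uses a plain, cheaper counter and is single-threaded only.

```rust
use std::sync::Arc;
use std::thread;

// Spawn `n` threads that each clone an Arc (an atomic refcount increment),
// then report the strong count after every clone has been dropped.
fn clone_across_threads(n: usize) -> usize {
    let shared = Arc::new(42u32);
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let s = Arc::clone(&shared); // atomic increment of the refcount
            thread::spawn(move || *s)
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 42);
    }
    Arc::strong_count(&shared) // back to 1 once every clone is dropped
}

fn main() {
    println!("strong count after join: {}", clone_across_threads(4)); // prints 1
}
```

(`Rc::clone` looks identical at the call site, but the compiler rejects sending an `Rc` to another thread, which is exactly the tradeoff: cheap non-atomic counts or thread-safe atomic ones.)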
I don't want to have to think about breaking cycles in my data structures (required when using ref counting) any more than I want to think about allocating registers.
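For what it's worth, the usual way to break such cycles in refcounted Rust is a `Weak` back-edge; a minimal sketch with a hypothetical `Node` type:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A parent/child pair. If the child held an Rc back to the parent, the two
// counts would keep each other alive forever; a Weak back-edge breaks the
// cycle, at the cost of having to think about which edge is the weak one.
struct Node {
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn make_tree() -> Rc<Node> {
    let parent = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        parent: RefCell::new(Rc::downgrade(&parent)), // weak back-edge
        children: RefCell::new(vec![]),
    });
    parent.children.borrow_mut().push(child);
    parent
}

fn main() {
    let parent = make_tree();
    // Only the local binding holds a strong count on the root: no cycle.
    assert_eq!(Rc::strong_count(&parent), 1);
    // The child can still reach its parent while the parent is alive.
    assert!(parent.children.borrow()[0].parent.borrow().upgrade().is_some());
}
```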
Yet we still read articles and threads about how bad the Go GC is and the tradeoffs that it forces upon you.
I get the feeling that the industry is finally starting to realize that GC has been a massive mistake.
Memory management is a very important part of an application; if you outsource it to a GC, you stop thinking about it.
And if you don't think about memory management you are guaranteed to end up with a slow and bloated app. And that is even before considering the performance impact of the GC!
The big hindrance has been that ditching the GC often meant using an old and unsafe language.
Now we have Rust, which is great! But we need more.
The Go GC isn't that great, it's true. It sacrifices huge amounts of throughput to get low latency: basically a marketing-optimised collector.
The new JVM GCs (ZGC and Shenandoah) are more sensibly designed. They sacrifice a bit of throughput, but not much, and you get pauseless GC. It still makes sense to select a throughput-oriented collector if your job is a batch job, as it'll go faster, but something like ZGC isn't a bad default.
GC is sufficiently powerful these days that it doesn't make sense to force developers to think about memory management for the vast bulk of apps. And definitely not Rust! That's one reason web apps beat desktop apps to begin with - web apps were from the start mostly written in [pseudo] GCd languages like Perl, Python, Java, etc.
I don’t think it’s fair to call garbage collection a mistake. Sure, it has properties that make it ill-suited for certain applications, but it is convenient and well suited for many others.
The same applies to manual memory management: instead you get slower allocators (unless you replace the standard library with something else), plus the joy of tracking down double frees and memory leaks.
I'm using Rust, so no double frees and no accidental forgetting to call free(). Of course you can still have memory leaks, but that's true in GC languages too.
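To illustrate that last point, leaking never violates Rust's safety guarantees; a tiny sketch (the `leak_a_string` helper is made up for the example):

```rust
// Box::leak hands back a &'static reference and the destructor simply
// never runs -- a deliberate, perfectly safe memory leak.
fn leak_a_string() -> &'static str {
    Box::leak(Box::new(String::from("lives forever")))
}

fn main() {
    assert_eq!(leak_a_string(), "lives forever");
    // std::mem::forget likewise discards a value without freeing it.
    std::mem::forget(String::from("never freed either"));
}
```

(Accidental leaks via `Rc` cycles are the more common real-world case, but the point stands: "no leaks" is not part of the safety guarantee in either Rust or GC languages.)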
That is not manual memory management though, and it also comes with its own set of issues, as everyone who has tried to write GUIs or games in Rust is painfully aware.
That's true. The comment by mlwiese up-thread, that I responded to, praised Go's low GC latency without mentioning the heavy memory and throughput overheads that come with it. I felt it worth pointing out the lack of a free lunch there; I think a lot of casual Go observers and users aren't aware of it.
Agreed, although if Go had proper support for explicit value types (instead of relying on escape analysis) and generics, as in e.g. D or Nim, that could be improved.
I don't think that's as hard as you make it out to be. Notably, Zig does not have a default allocator and its standard library is written accordingly, making it trivial to ensure the use of the appropriate allocation strategy for any given task, including using a debug allocator that tracks double-frees and memory leaks.
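You can approximate that kind of debug allocator in Rust too; here's a sketch of a counting `GlobalAlloc` (the `Counting` and `live_allocations` names are mine, and unlike Zig's debug allocator this catches leaks but not double-frees):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts live heap allocations: a nonzero balance at the end of a
// scope points at a leak.
struct Counting;

static LIVE: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        LIVE.fetch_add(1, Ordering::SeqCst);
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        LIVE.fetch_sub(1, Ordering::SeqCst);
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static ALLOCATOR: Counting = Counting;

fn live_allocations() -> usize {
    LIVE.load(Ordering::SeqCst)
}

fn main() {
    let before = live_allocations();
    let v = vec![1u8, 2, 3]; // one tracked heap allocation
    assert_eq!(live_allocations(), before + 1);
    drop(v); // balance restored: no leak
    assert_eq!(live_allocations(), before);
}
```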
No, and as far as I am aware it makes no attempt to do so other than some allocators overwriting freed memory with a known signature in debug modes so the problem is more obvious.
Go has had sub-millisecond GC pauses with multi-GB heaps since 2018. See https://blog.golang.org/ismmkeynote