
As a Clojure programmer, I don't care about any of the Java language features or improvements, but I'm super happy that I'm getting a state of the art JVM that is continuously developed, maintained, extended and optimized, over a time scale of decades.

This is incredibly useful: having a good VM to run your code in, with good modern garbage collectors, is not an obvious thing (as many other languages have learned).

This is not the LTS release, so I won't be switching to it, but I'm looking forward to the next LTS.



As a Clojure programmer, I tell you that you should, because the moment Java introduces new features, libraries will start using them, and most Clojure libraries are wrappers around Java libraries.

In addition, if Clojure does not catch up with new Java features, the ergonomics of using these new Java libraries from Clojure decrease.

And they are still trying to figure out how Clojure's IFn interface should interoperate with Java functional interfaces.
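To illustrate the interop gap: Clojure functions implement `clojure.lang.IFn`, not `java.util.function.Function`, so passing one to a Java API usually needs a bridging adapter. The sketch below uses a hypothetical `IFnLike` stand-in (the real IFn has many `invoke` arities), just to show the shape of the problem:

```java
import java.util.function.Function;

// Hypothetical stand-in for Clojure's IFn (the real interface lives in
// clojure.lang and declares invoke() for many arities).
interface IFnLike {
    Object invoke(Object arg);
}

public class IFnAdapter {
    // A Clojure fn does not implement java.util.function.Function, so a
    // Java API expecting Function needs a small bridging lambda like this.
    @SuppressWarnings("unchecked")
    static <T, R> Function<T, R> toFunction(IFnLike f) {
        return t -> (R) f.invoke(t);
    }

    public static void main(String[] args) {
        IFnLike inc = x -> (Integer) x + 1;       // stands in for a Clojure fn
        Function<Integer, Integer> javaInc = toFunction(inc);
        System.out.println(javaInc.apply(41));    // prints 42
    }
}
```

Every such crossing costs a wrapper object (or a cast), which is why first-class support for functional interfaces matters to guest languages.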

Also, when Project Loom lands on the JVM it will benefit Clojure too, allowing code to be simplified or removed, for instance around Clojure futures.
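For reference, this is what Loom's virtual threads look like on JDK 21+, where they are final: created and joined like platform threads, but scheduled by the JVM and cheap enough to spawn per task (a minimal sketch, not Clojure's actual future machinery):

```java
// Minimal virtual-thread sketch, assuming JDK 21+ (Project Loom final).
public class LoomDemo {
    static String runOnVirtualThread() throws InterruptedException {
        StringBuilder sb = new StringBuilder();
        // Thread.ofVirtual() builds a virtual thread; start() schedules it.
        Thread vt = Thread.ofVirtual().start(
                () -> sb.append("ran on: " + Thread.currentThread().isVirtual()));
        vt.join();  // join() establishes happens-before for the append
        return sb.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnVirtualThread()); // prints "ran on: true"
    }
}
```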

If Clojure catches up with value classes, that can increase Clojure's performance too.

But this disregard for Java features and improvements is typical of the Clojure developer who is so content with what he has that he forgets things can get better.


As another Clojure programmer, I say you should care about developments in Java. After all, the Java module system is precisely why classes became minefields with clojure.core/bean -- illegal reflective accesses and what not.
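The failure mode is easy to reproduce in plain Java. On JDK 16+ (where strong encapsulation is the default), `setAccessible` on internals of a `java.base` class throws, which is exactly the kind of wall bean-style reflection runs into:

```java
import java.lang.reflect.Field;
import java.lang.reflect.InaccessibleObjectException;

// Demonstrates the module-system failure that bean-style reflection hits:
// java.lang is exported but NOT opened by java.base, so setAccessible on
// String's private internals throws on JDK 16+ (earlier JDKs only warned).
public class ReflectDemo {
    static String tryOpenStringInternals() {
        try {
            Field f = String.class.getDeclaredField("value"); // lookup is fine
            f.setAccessible(true);  // blocked by module encapsulation
            return "accessible";
        } catch (InaccessibleObjectException e) {
            return "inaccessible";
        } catch (NoSuchFieldException e) {
            return "no such field";
        }
    }

    public static void main(String[] args) {
        System.out.println(tryOpenStringInternals()); // prints "inaccessible"
    }
}
```

The usual escape hatch is launching with `--add-opens java.base/java.lang=ALL-UNNAMED`, which is precisely the "minefield" feeling: per-package flags instead of code.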

As someone else noted in this comment section, a lot of useful Clojure libraries are wrappers over Java libraries. So improvements to Java used in these libraries are good for you, too.


The Java module system is a problem on Java as well. They defined "module" in the narrowest terms possible (API visibility) without addressing any of the modularization problems Java developers have to deal with every day, so we end up with another layer on top of the layers of third-party dependency management systems, package repositories, and runtime class loaders that we are forced to use to have a semblance of a working build and deployment toolchain. They could have expanded upon this foundation and come up with the equivalent of Cargo or Go modules, but instead they created this n+1 standard that 90% of developers either ignore or disable.
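To make the "narrowest terms possible" point concrete: a JPMS module declaration only states which packages it exports and which modules it requires. Versions, repositories, and artifact fetching are out of scope and stay with Maven/Gradle. A sketch with hypothetical names:

```java
// module-info.java (hypothetical module names, illustration only):
// JPMS expresses compile- and run-time visibility -- nothing about
// versions, repositories, or how dependencies are fetched.
module com.example.app {
    requires java.sql;              // visibility of another module's exports
    exports com.example.app.api;    // everything else stays encapsulated
}
```

Compare this with a Cargo.toml or go.mod, which name versions and sources; that is the gap the comment above is pointing at.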

I'm sure it works for OpenJDK though.


> They defined "module" in the narrowest terms possible (API visibility) without addressing any of the modularization problems Java developers have to deal with every day

What other way would be possible? How would you solve it?


I second this, as a JRuby user.


Any user of a guest language should care about what the "systems" language of the platform offers, if nothing else, to understand those stack traces, and FFI to platform libraries.


The same applies to Clojure CLR, right?


I haven't looked into CLR in a long time, but it feels like it has a fraction of JVM's adoption and community size. Microsoft also seems to be prioritizing Typescript and Node internally with its recent moves.


> Microsoft also seems to be prioritizing Typescript and Node internally with its recent moves.

You may want to look at Blazor Webassembly (https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...) and Blazor United (https://devblogs.microsoft.com/dotnet/asp-net-core-updates-i...)

They scrapped the old CLR and started over with the new "CoreCLR".

Then they ported the new CoreCLR, along with the framework's BCL, to WebAssembly, so the CLR now runs in the browser; upon that they built Blazor WebAssembly.

So you can now target the CLR and have your code run in the browser.

And now they are making progress on Blazor United, where components can start off as exclusively server-side rendered and then transparently and automatically move to WebAssembly rendering within the same application or page. It really is crazy stuff.

Typescript is not the only thing going on in MS Engineering.


To be exact, they have three active CLRs: CoreCLR, (Framework) CLR and MonoVM. WebAssembly, Android and iOS apps use the MonoVM because it is optimized for AOT.


At the risk of starting a flame war, the CLI and CoreCLR are a far superior VM and platform from a technical standpoint. A lot of the features scheduled for Java 21 / Project Valhalla are basically just catching up to modern VM design. Of course, there is more to the choice of a platform than just the technical differences.


There might be a case for this argument if the CLR had anything even remotely resembling HotSpot's runtime optimization.

What you describe is the result of different philosophies/priorities. The CLR focuses on static compile-time optimization, while the JVM is a highly dynamic construct with unmatched runtime analysis. In the 90s, there was a hope that with sufficient escape analysis, the need for user-defined primitives would vanish, which is why value types were not done earlier.

By itself, accessing values on the stack rather than by reference is technically trivial. The problem lies in backporting that kind of thing.


> What you describe is the result of different philosophies/priorities

Maybe, but it might also be the result of bad design decisions.

> highly dynamic construct with unmatched runtime analysis

As someone who has spent quite a bit of time working on custom optimizations around HotSpot, I fail to see how anyone can describe the current state of the JVM (J9 is a bit better) as unmatched. V8 and, to some extent, Julia have much stronger dynamic analysis.

> CLR focuses on static compile-time optimization

Sources?


> Maybe, it might also be the results of bad design decisions.

Most assuredly not. Back in the 90s, the cost of loading memory and performing a CPU instruction was essentially equal. Today, fetching data from RAM takes 100x longer than a CPU instruction. This makes locality of data absolutely crucial and is a consequence of computing throughput increasing, but latency remaining stagnant (think of it like a database transaction).

With the focus on garbage collection, it made sense to throw everything onto the heap and use runtime analysis to inline as much as possible. Nobody had foreseen how such hardware fundamentals would change over the following decades.

As far as I'm concerned, the proposed JVM spec for value types is the most promising model I have seen anywhere. Instead of a binary choice between entities on the heap and values on the stack, you have more granular control, with incremental benefits and constraints.
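For readers unfamiliar with the proposal: the Project Valhalla drafts (JEP 401) sketch roughly the following syntax. It is not final and does not compile on current JDKs; it is shown only to illustrate the model, where a class gives up identity in exchange for flattened layout:

```java
// Proposed (not-yet-final) Valhalla syntax per the JEP 401 drafts --
// illustration only, does not compile on current JDKs.
// A value class has no identity, so the JVM may flatten instances into
// arrays and enclosing objects instead of heap-allocating each one.
value class Point {
    private final double x, y;   // value-class fields must be final
    Point(double x, double y) { this.x = x; this.y = y; }
    double x() { return x; }
    double y() { return y; }
}

// Point[] pts = new Point[1_000_000];  // may be laid out contiguously,
//                                      // no per-element object headers
```

That "may be flattened" wording is the granularity the comment above refers to: the programmer states the constraint (no identity) and the JVM chooses the layout.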

> As someone who has spent quite a bit of time working on custom optimizations around HotSpot, I fail to see how anyone can describe the current state of the JVM (J9 is a bit better) as unmatched. V8 and, to some extent, Julia have much stronger dynamic analysis.

Didn't know that, probably worth looking into. Though, I wonder how much you can compare V8 and the JVM, given the fundamental difference between a static and dynamic language.

> CLR focuses on static compile-time optimization

I thought that was common knowledge. I mean, does the latest CLR perform any significant amount of runtime optimization? From what I've read, the CLR makes use of CPU-specific instructions such as SIMD, but does no cache/layout optimizations or any inlining at runtime.

https://learn.microsoft.com/en-us/archive/blogs/davidnotario...


The Java JVM was originally designed with a very dynamic language in mind, e.g. Java's support for dynamic loading, dynamic binding, reflection etc. The influences at the time were Smalltalk and ObjectiveC. As Java has evolved to be a much more statically typed language (especially Java 5), the JVM has somewhat struggled to exploit this while maintaining backwards compatibility. It's nowhere near as bad as the Python situation though. I believe the CLR was built with things like parametric specialisation in mind from the beginning.


I agree with all of this. I am not sure if you meant this as opposition to my comment.


Not opposition, I was just trying to add some historical context to your comment. I agree the JVM is poorly suited for modern Java, and Oracle knows this too, hence they are developing GraalVM.


In what aspect is the VM “far superior”? Where are the CLR’s state-of-the-art or low-latency GCs? Does it have observability tools like JFR?


I do not use C# (in fact, I do my best to avoid Microsoft products). But I can tell you that there are obvious mechanisms C# employs to work around its lackluster GC. Stack allocation and spans immediately come to mind.

As far as I know, Java offers no way to mark objects as stack-allocated, but C# does. Spans in C# allow programmers to produce subarrays without copying. Enums in C# are stack-allocated, unlike Java. So on, so forth. None of this is a huge deal for Java since some of the best GCs in the world are implemented atop the JVM. But I do think C# offers its workarounds when GC performance gets in the way.


But these are all trade-offs that make C# almost as complex as C++. Sure, there are cases where these low-level optimizations allow for better performance, but don't forget that the more things we specify, the less freedom the runtime has. SQL is a good example here: it specifies the what and not the how, and this makes a good DB very hard to beat on complex queries.

The way I have seen it described somewhere: C# has a slightly higher performance ceiling, but a naive application may very well run faster in Java.


In Java, primitives go on the stack. Escape analysis helps in other ways.



