JavaScript is probably like 10x or 100x slower than C or similar languages when you write something bigger and any of the following happens:
- the ratio of code size to run time is too big. Then the jit can’t keep up.
- you use a lot of value types. Them be structs in C, no need for allocation. In JS them be objects. The JS VM will try to escape analyze them, and it will succeed some of the time, but fail enough of the time to cause massive punishment.
- you churn objects while having a large base heap size and the generational hypothesis holds only a bit. Then you’ll wait for the GC a lot.
- you have a class hierarchy with many descendants and you often access properties or call methods on the base type. Then vtables or whatever work great but JS inline caches blow up.
- probably lots of other conditions.
Write enough code and at least one of these will hold and you’ll be slow in JS, fast in C.
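To make the value-types point concrete, here's a toy snippet of my own (not from this comment) showing the kind of code that's allocation-free in C but heap-churning in JS unless escape analysis kicks in:

```javascript
// Illustrative only: in C this would be `struct Vec2 { double x, y; }` on the
// stack. In JS each `{ x, y }` literal is a heap object unless the VM's
// escape analysis manages to scalar-replace it — and it only sometimes does.
function sumLengths(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    const v = { x: i, y: i + 1 };                // allocation, maybe
    total += Math.sqrt(v.x * v.x + v.y * v.y);   // short-lived garbage if not
  }
  return total;
}
console.log(sumLengths(1)); // 1 (sqrt(0*0 + 1*1))
```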
(Source: I work on JSC and I implemented a lot of its optimizations. Benchmarks like the ones in this post are the sort of thing my JITs eat for breakfast. It’s cool to see people throwing me softballs but I like to be honest about what the technology I work on is capable of.)
I've been thinking a lot about this over the years. Having been a dyed-in-the-wool Java VM person for years, and then also working on JS, I started to see the marketing speak I blabbed for years just kind of disappear in the face of huge applications.
JavaScript, Java, C#, generally all JITed languages will eventually end up exhausting the size of their JIT code caches. You just gotta hope that your hot paths are contained in there and that the JIT balances its hunger for deep inlining with the need to not overcompile. Because there's nothing worse for a JIT than when the entire application is just a tepid soup and there are no hotspots. Then you aren't getting escape analysis, you end up with tons of polymorphism, and generally, performance suffers.
Despite 20 years working on JITs and dynamic optimization, I am more convinced than ever you just can't beat static compilation and programs designed to not over-allocate, to not overabstract with too much polymorphism, closures, and heavy allocation. Programs still need to be a bit miserly to get maximum performance.
With Java the code maps fairly closely to native, so if you reach that point you can use something like GraalVM to compile ahead of time for better performance. Quite a lot of people have actually measured this with large apps: in most cases JITs perform better at steady state and AoT performs better on startup. But for huge apps this might flip.
But the thing is that Java is pretty close to native. Primitive types, object allocation, threads etc. all correlate almost directly to how CPUs and OSs work. So you get a sense of performance and what's expensive/isn't.
With JavaScript there are so many complex abstractions and constructs it's really hard to get a sense of what the final ASM will look like in an optimal case. Unfortunately, some Java changes (e.g. Valhalla) aim to bring some of that lack of clarity into Java. Still, thanks to typing and a simpler syntax Java is MUCH better positioned here than JavaScript.
I agree that with Java you can get close to native performance if you are careful. (I wrote an MCU emulator that beat its C competitor back in 2005, and that's all low-level hackery.) But in practice people write a ton of abstraction in Java and programs are quite trashy (i.e. they allocate a lot of intermediate objects that quickly become garbage). The APIs and best practices encourage a lot of overabstraction, IMHO, and that leads to big, lumbering programs that struggle to get back to reasonable performance levels even with the prodigious efforts of VMs.
I'd like to add that this argument exists for pretty much all "slow" languages and that "if you're careful" often can mean "throw away all upsides your language provides" [1].
Personally, I find these discussions tiring because the ones arguing such points are usually the kind of person that doggedly clings to the "wrong tool for the job".
For example: Although I see rust as the better alternative in many places, I personally find C++ to still be a better language than Java. That doesn't mean I'll insist that Java has no worth or that every program should be written in lower level languages. Horses for courses.
Why can't we just go the Python way and accept the shortcomings of the languages we hold dear?
[1]: Edit: For GCed languages, this often is "avoid the GC at all costs and essentially write less readable C".
What do you mean by exhausting jit caches? Do other VMs limit their size somehow? JSC basically doesn’t.
That doesn’t solve the problem of course. I have this metric that I use to philosophize about this: SIPS, or static instructions per second. If your program has low SIPS (i.e. it’s the benchmark of HotSpot’s wet dreams, or OP’s post) it means that the total code is small and it runs a long time - so the JIT will go on a rampage early on and then you’re good. If the program has high SIPS, then the JIT will still be catching up when the program terminates. Most real things that aren’t web servers have high SIPS.
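To make the metric concrete, here's a toy formulation (my own framing, with made-up numbers — not anything from JSC):

```javascript
// SIPS = static instructions per second: total static code size divided by
// how long the program runs. The counts and times below are invented.
const sips = (staticInstructionCount, runTimeSeconds) =>
  staticInstructionCount / runTimeSeconds;

// Small benchmark that runs for ages: low SIPS, the JIT compiles everything hot.
console.log(sips(1e4, 100));

// Huge app that exits quickly: high SIPS, the JIT never catches up.
console.log(sips(1e8, 10));
```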
Reflecting more on this. AFAIR Hotspot does have a hard limit on the amount of machine code it will generate. V8 has a hard limit on the heap size, and code pages are part of that. But both of those are pretty big numbers, so if you are hitting that you are probably huge already. But generally there are counters in the metadata for functions or classes, depending on the system, that limit how many times a given unit might be recompiled, typically to limit the damage deopt loops can do. So programs end up gradually leveling off as they hit those limits.
I haven't really paid attention to V8's policies for years, but generally speaking, it still does this to some extent.
I think of SIPS as basically the resident set size for code. Depending on how the system ages your code, you might be stuck with unused JIT code until it can be recycled, if at all. V8 used to have code aging and the implementation was a huge bug farm. I am not sure if that survived the switch to TurboFan/Ignition. Of course, V8 GCs code about as aggressively as it compacts the old generation.
JSC limits recompilation only in the sense that we have exponential back off on each jettison. But that means that a sufficiently hot function will keep getting recompiled forever even if speculation is permabroken in that function.
Other than that JSC has only very weak heuristics for aging out code. And on sufficiently empowered devices our JIT pool is hundreds of megs.
So, we will happily compile your whole program with the jit if your whole program is warm and runs long enough.
To what extent can writing your code in WebAssembly (e.g. Rust) help with those points (e.g. the structs-in-C argument)? It would still run in a JS VM so I'm guessing it helps a bit, but not to the full extent?
WebAssembly helps a lot, but doesn’t solve the jit issue.
WebAssembly also introduces its own issues since it’s a BYORT system (bring your own runtime). So, there’s more to JIT (your language’s whole runtime) and more to hold in memory (your language’s whole runtime). You might say, “but pizlonator, every language has a runtime”, to which I’d say: yes but usually that shit gets shared by every running instance somehow. In WebAssembly every instance pays for its runtime’s memory footprint and it’s quite likely that every instance has a different runtime so the code isn’t shared either. Basically, WebAssembly is knee-capped on memory footprint by design. JS isn’t.
Your comment seems substantially misinformed or inapplicable.
WebAssembly has no truck with JIT. Just-In-Time compilation is all about not performing optimisation ahead of time, but only doing it when the code in question is used, and guiding the optimisation by how the code is being used. In WebAssembly, the conversion from the binary .wasm format to machine code is a comparatively lightweight process with no real optimisation of this form: rather, such optimisations must be done as part of producing that .wasm blob (in regular compilation possibly with profile-guided optimisation to go even further than JIT can).
So instead, when you’re using something like Rust (as distinct from, say, compiling CPython or V8 to WASM), what you’ve got is a fairly small amount of what you’d probably call runtime code (allocator, panic mechanism, str, Vec<_>, some other parts of the std crate), probably something like 25KB (or with a little care and compromise, more like 5KB) except when Unicode tables are required, compiled from WASM byte code to similarly-sized (though I don’t know the real ratio) machine code faster than it can be downloaded. That’s code memory usage; for the data memory usage, well, your Rust/WASM will normally blow your JavaScript out of the water there with much more efficient packing of data into memory, even if you’ve got a fair bit of overallocation.
The fact of the matter is that the runtime parts which can be shared for JavaScript are actually not all that large, and routinely dwarfed by included libraries (React, &c.). WebAssembly is by no means knee-capped on memory footprint; rather, so long as you’re using it in a sane way (Rust, standard WASM optimisation and code-shrinking techniques, that kind of thing), there’s a pebble in the way that you’ll notice if you’re a beetle, but if you’re even rabbit-sized you probably won’t even notice it.
As regards pbadenski’s comment: WebAssembly does not run in the JavaScript virtual machine, it’s a completely separate thing that is merely able to call and be called from JavaScript via a foreign function interface. The backing memory buffer may also be accessed from JavaScript, but that’s immaterial in the consideration.
The wasm blob has to undergo the hardest parts of optimization to get native code from it. You need to select instructions and allocate registers. Those things are time consuming; they are on the same order of magnitude of time consumption as most full compiler pipelines.
Compiling wasm to native is only faster than downloading if you compile without optimization. That’s common in wasm VMs but then there’s an optimizing JIT that runs adaptively later, just like a JS or Java VM would do.
The fact that JS VMs share the ICU implementation between instances is a huge deal. That’s not the only thing that gets shared. Also, it’s not about just sharing code; it’s about sharing memory for the runtime’s state and for allowing elastic reuse of space for objects. In wasm the sharing is page granularity at best.
JavaScript is still really slow compared to compiled languages - C, C++, even Java. That super-simple benchmark even after JIT is still 2-3x as slow as the C version, and more complex code can’t be optimized nearly as well.
I can confidently say that games and apps in the browser and Electron are noticeably slower than other apps. You don’t see many web games because the graphics required for games today can’t really be rendered on the web in 20+ FPS, at least until WebGL2/WASM/WebGPU get better support.
But JavaScript is fast enough. Because the vast majority of programs, especially websites, don’t need really fast code like C. They don’t need to redraw every frame, don’t need 3D capabilities, don’t need to perform expensive computations, etc.. If you need a fast program that does those things 99.9% of the time you can just make it a real app, or you can write the fast parts in WebAssembly.
At the end of the day computers are very fast, so a program could be written in any language as long as it isn’t doing anything super performance-hungry and the compiler/interpreter has half-decent optimization.
This line of reasoning works adequately when looking at a single program, but when you look at a larger group of programs, performance still matters. The more performant each individual program, the more programs can be run simultaneously.
This helps everywhere: mobile (better battery life, smarter apps), desktop (more apps open without swapping), server (more compute in the same footprint), and embedded (smarter devices, less power, cheaper hardware).
I’ve also seen plenty of web apps which load slowly because they load too much javascript. Most of it unused. I was asked to help out with one project my coworkers were working on where the bundler was pulling in multiple copies of momentjs, each complete with its own copy of the global time zone database. The page loaded way faster once we cleared that out.
A few years ago I noticed most websites just seemed slower than they should on my computer. The culprit turned out to be MetaMask (an Ethereum wallet extension). It was adding 700 kilobytes of javascript to each page load thanks to web3. It also exposed my Ethereum address to any website that asked.
The JS was cached, but just parsing that much code added noticeable lag to every page load. Everything felt zippy once I uninstalled that.
Loading assets != running JavaScript, which is what we're talking about here. It's also a difference between startup slowness and long-running slowness. And Electron, for example, would (normally) only ever be "loading" JavaScript from disk, like any other local app would.
In my experience, the great majority of perceptible slowness comes from badly-written JavaScript and running too much code, generally not even code that touches the DOM, though certainly touching the DOM too much in bad ways (and structuring the DOM in bad ways) often contributes to it.
Certainly the web as a platform is slower than a precisely-built thing honed for speed for a particular use case, but speaking generally the web is only perceptibly slow when you write bad code (or are waiting for the network). Which is admittedly rather common.
Electron apps are memory heavy and slow at interaction because Chromium loads a ton of rendering and Web API code into memory, and interactions triggered by keyboard/mouse/touch input go through layers of non native handlers so that event listeners behave uniformly across devices. It's all part of a complex rendering loop, and whether an element has a fixed position during scroll can affect memory and performance by hundreds of times compared to those 2x js-vs-native you mention.
Oh, and the Web API has stuff you wouldn't expect like USB, Bluetooth, MIDI, vibration, TPM etc.
> You don’t see many web games because the graphics required for games today can’t really be rendered on the web in 20+ FPS
Nonsense. There's nothing wrong with the graphics you can run on the web. Web games flourished before the iPhone. And it seems safe to assume that they died because of the iPhone.
It was a horrific development in the space, since mobile games differ from flash games in being (1) more expensive and (2) worse.
It's fast now because a bunch of mega-corps (Google, Apple, Microsoft before dropping the ball) put a huge investment in making JavaScript interpreters fast.
Mozilla gets no credit at all? According to some old ZDNet benchmarks I could google up[1], IE7's SunSpider result was 22678ms, Firefox 2's result was 12460ms, and Firefox 3 RC1's was 2377ms. So Mozilla managed to make the Firefox 3 JS engine about 10 times faster than IE7 in SunSpider, and more than 5 times faster than Firefox 2. Safari at the time was pretty much on par with Firefox 3. That's before any public Google Chrome release (their first release was a few months after Firefox 3 stable). After the Chrome release, JS perf mostly became an arms race between Google and Mozilla with the other vendors trying not to be left too far behind in the dust.
Of course, in current gen browsers and on current gen hardware, sunspider is so fast it's more noise than measurement.
Mozilla is funded at 90+% by Google, whatever they spend it's mostly Google's money originally so it doesn't make sense to count them as a separate entity investing on that front.
Well, my problem with the testing process that goes into many programs is that they only test with decent hardware. What about those who do not have the latest, state of the art computer or smartphone?!
The given C program has no input or output and exhibits undefined behaviour (signed overflow). Even with the most rudimentary of optimizations, this program will execute in constant time. https://godbolt.org/z/YdshfqdWs
It was never slow. The event loop is very performant compared to a language with a GIL and a culture of synchronous IO (looking at you, python and ruby), and V8 is amazing.
I notice that the author doesn't give any links to claims that JavaScript is slow, and the first two pages of search results are either discussions of how fast JS is or guides to JS performance. Who exactly was lying?
> It was never slow. The event loop is very performant compared to a language with a GIL and a culture of synchronous IO (looking at you, python and ruby), and V8 is amazing.
Having written a lot of code in both Python and JavaScript I disagree here. It's not really the GIL that slows down Python compared to JavaScript; that's more down to the lack of a comparable JIT. Yes, there's PyPy, but it has received a fraction of the investment of V8, and Python is a more complex language to create a good JIT for. Support for multithreading (albeit without parallelism) may be one of those complicating factors.
JavaScript doesn't need a GIL because it doesn't support multithreading. WebWorkers are more akin to Python's multiprocessing since objects are not shared between threads.
You have to be careful doing anything compute intensive (e.g. JSON.parse of a large blob of JSON) when using cooperative (async) rather than preemptive (threads) concurrency since you may inadvertently block the event loop, substantially increasing latency for other tasks. Handing off such tasks to a thread pool is harder in JS than Python since you can't easily deferToThread as you can in an async Python program.
Cooperative multitasking has its advantages too, of course. An event loop provides a clear boundary for efficiently batching calls, which can be awkward with synchronous code. Folks have been writing network servers in Python using asynchronous techniques for decades now. Twisted is almost 20 years old at this point and asyncore was added to the standard library even earlier.
Oh yeah, sorry, I should have said nodejs (which has been V8 from day one) was never slow. Some of the older browser implementations were shocking.
I should also add a massive asterisk to "never slow", clarifying that it's usually within a couple of orders of magnitude of compiled code, and faster than that for glue language tasks that let it call native code for the heavy lifting.
Never? I clearly remember back in the mid to late 90s, you could easily bring the browser to its knees with a simple JS loop. It wasn’t until Google came around and needed fast JS for Gmail, google docs, maps, etc. that JS became fast as they poured resources into that space and everyone else played catch up.
The second code comparison is supposed to point out that the second version is more readable. Unfortunately, it's still not easy enough to read to avoid bugs caused by javascript's "unique" approach.
`findSmallestPositiveValue([2,11])` gives the wrong value for the supposedly better function.
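For anyone wondering why: I'm guessing the article's version (not quoted here) leans on `Array.prototype.sort` without a comparator, which sorts by string representation:

```javascript
// Default sort converts elements to strings, so "11" < "2" and 11 sorts first.
console.log([2, 11].sort());                // [ 11, 2 ]  — the bug
console.log([2, 11].sort((a, b) => a - b)); // [ 2, 11 ]  — numeric comparator fixes it
```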
> Even if the second function takes twice as long as the first, we are in the realm of nanoseconds.
How can you possibly know this? You can make either function take arbitrarily long by increasing the size of the input. The second one scales worse. I see no reason to assume an upper bound on the input size given. If you want the behavior to be obvious at a glance, just name the function what it does. Just like it already is. Or leave a block comment.
The argument here is that javascript is so fast that it doesn't matter what you write because you don't have to worry about that. I just don't know how to reconcile that with the fact that I regularly see websites that have noticeably slow javascript "startup" times.
These probably have nothing to do with the speed of javascript as such, but rather with the whole stack: the data for a page that needs to be loaded from some database, and a layout that then needs to be computed in a reactive manner.
Javascript is very fast, but what is not fast is the frameworks that use it or the ways people use those frameworks. If people with similarly minded thinking as those who develop desktop only apps with native languages, would write javascript only apps, I would say the performance would be very close.
But many developers have no understanding of how to write performant code, as they have used to just using some heavy frameworks that handle all that stuff for you, so easily you become limited by that thinking.
That is at least my view into this world, where naive web-developers who have like 2 years of software development can get to deploy stuff to production. Iteration speeds need to be fast, so a lot more people are employed without the skills to really understand what is happening, thus resulting in poorly executed services. Or maybe it's just not a priority.
Also, many technically inclined people just don't get that what's important is being able to execute a function at all; the speed is not really so important in the end, even though we would like it to be. Those of us who have dwelled deeper into the workings of computers tend to live in a bubble, thinking that everybody else is like that too, or should be.
Many are just doing their jobs, and that does not include learning how a CPU or memory works, but it might be limited to learning how a Vue.js framework works or how to use React.
> You can make either function take arbitrarily long by increasing the size of the input. The second one scales worse.
I think this is the point: the second one scales worse, absolutely (ignoring the bug you mentioned), but it really is more readable, and because V8 is so fast, why not use the second version?
Block comments and function names are important, but if someone needs to modify the code to add or change functionality nothing beats simple & readable code.
> websites that have noticeably slow javascript "startup" times.
Doesn't that just mean it can take a long time for the browser-application to download all of its scripts?
That can certainly take a long time but it depends on
1) The speed of the web-server(s) serving those scripts
2) The speed of the network you are connected to
So web-sites taking a long time to execute their JavaScript on your PC in the browser does not say much about the speed of JavaScript the language.
But it is a practical concern of course. Which brings to my mind the fact that JavaScript programs can exhibit extreme parallelism, because they can execute in millions of browsers at the same time, think SETI-search, or unauthorized crypto-mining.
So in practice JavaScript can have a huge computational throughput, in other words great speed, because it can execute on multiple clients at the same time, easily.
JS startup time also includes parsing the code and (initially) running in a very low-optimization mode, usually a bytecode interpreter, while the VM gathers profiling data in preparation for later JIT compilation of hot spots. This can be a serious hit to page load times even when all scripts are cached.
> I regularly see websites that have noticeably slow javascript "startup" times.
True, same as any language or applications fast CPUs shouldn't be a free chance to completely ignore writing efficient code. Although I bet those websites are slow because they are doing silly things like adding 1000 items to the DOM one by one rather than being slow because someone didn't optimize their use of .forEach()
Some anecdata: Node was roughly as fast as code compiled with dmd, the D compiler you reach for when you want fast iteration. The LLVM- and GCC-based D compilers are hugely faster.
By using JavaScript you are throwing away something like 2-3x.
There will be cases where JS can keep up because it reduces to a local loop which gets optimized the same way as any other language, but all those little other bits add up.
Same with Python. If you do everything in numpy you can be very fast, but what happens if you don't? You could use a fancy jit but now you have a deeper stack with no control over the emitted code - nice if you already have the code, but not ideal.
I don't know that I agree with this article. While there is absolutely a problem with package/framework/build-tool churn that makes development super painful, I don't feel that developing with JavaScript has been any more painful than developing with VB, C#, PHP etc. was in the past.
All languages have their pain points, and modern JS/TS, while indisputably having their quirks, aren't particularly more quirky than those languages (IMHO)
I recently got back to a web app with node I'd been working on two years ago, and I'm finding packages deprecated, command-line arguments that no longer work, etc. Lots of breaking changes all the time. There's definitely something worse than average with the node development experience. Things are quickly changing and at the same time very poorly documented.
I took a break from my 3rd attempt at learning Node (which I have since abandoned permanently) to learn C and write my own software 3D renderer in it. I found this, to my surprise, a vastly simpler and more pleasant experience.
Well, why did you choose packages that change quickly and are poorly documented then? There are packages that, OTOH, have up to zero dependencies, and haven't changed in years; if you consider those stale, I guess nobody can help you.
I chose the most basic and common packages I could find (terser and rollup), and the changing, no longer valid command-line options I was talking about were command-line options of the npm package manager.
JavaScript has incredibly good async APIs (promises, async/await, and the seamless integration between the two).
As a result, I'm convinced that software written in JS is much more likely to actually perform long-running (IO-bound) tasks in the background/in parallel, simply because it isn't a huge pain to do it.
The actual execution speed matters a lot less at that point.
You would think that... But unfortunately this isn't the case. You end up running into the global lock problem where something is stuck deep inside NodeJS and you're at 30% CPU utilization with no idea why. Because everything is async you have no way of understanding what the hell is broken without going into the NodeJS source code and debugging that.
Profiling NodeJS is futile because of its async nature. You end up with a lot of noise and no substance. I'm looking forward to project Loom which would bring Java threads into hybrid green/native mode. That would deliver throughput as fast as NodeJS but with the performance of Java and clarity of simpler stack traces.
Right, async processing performance is hard to predict, but still it helps to be able to use async/await.
This brings to my mind the current logistics crisis in the global trade. Lots of parallel tasks and traffic jams and all of a sudden stores don't have stuff and prices are going up and we don't know when will it be back to normal. That is what Node.js can feel like, because it is hard to understand what tasks execute when and who is waiting for what.
It doesn't eliminate loops, but it almost certainly is an empty loop after the full pipeline of optimizations and scheduling. I'm guessing the C version is unrolled or even completely eliminated, because LLVM will definitely eliminate empty loops.
It feels like it ought to, but I did some light testing in the browser and it seems that V8 never optimizes it into an empty loop, even after it's hot code for a long time. I know turbofan does LICM, but I don't think it knows how to catch cases like this.
I've run the C example code through Godbolt with -O2 and... it removed all the loops because the result wasn't used.
Then I added a printf statement and the algorithm came out without too much strangeness.
When I reduce the number of iterations, the compiler inserts a constant where the calculation would've taken place. The number of iterations in the benchmark seems to be too high for the compiler to evaluate and optimize out. Clang seems to stop any full evaluation at exactly 101 iterations, meaning there's probably a default somewhere that puts the limit at a nice, round 100.
I've taken the code, increased the number of loop iterations by a factor of 10 (to make the differences more pronounced), added a printf/console.log to make sure the loops themselves don't get thrown away (clang does that with -O2), and on my laptop (i7-10750H) the C code, compiled with clang 12, runs 10000000000 iterations in 9.462s whereas the same number of iterations in Node 16 runs in 19.990s. That's more than 2x the execution time.
For comparison: Java runs in about 9.846s, from the command line (java code.java, no compilation step). C# (dotnet 5) runs in about 9.736s after compilation; compilation takes about a second. Rust runs in about 9.435s after about 0.47s of compilation. Kotlin runs in 9.640s, but it took a few seconds to be compiled into a JAR first. Python 3.9.7 takes forever, but I think that's because its arbitrary length number implementation is trying to make it output the correct result instead of faking it like the other programs are doing. PHP 8 also didn't really stand a chance at over 90 seconds.
JS would probably win in a huge code base consisting of mostly dead code (node_modules, anyone?) where compilation would get in the way of quick edit-run-verify loops, but only starting from scratch without a compiler cache set up. From what I can tell, JS certainly isn't _slow_ like PHP, but it isn't _fast_ either. It's somewhere in between the old interpreters of old and compiled/runtime-JIT'ed languages.
If anything, this algorithm benchmark would indicate that you're probably better off running C#/Java rather than C/Rust because of the negligible performance difference with the huge benefits of language safety with no effort. This benchmark is far from normal program code, of course, so it doesn't really prove anything.
It should be noted that NodeJS outputs the result as "Infinity" whereas C and other languages create a value that's clearly been bit-wrapped and overflowed quite a bit. This can be an advantage (because Infinity + anything = Infinity) or a disadvantage (float math) but that's just how the language works.
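To make that concrete (a toy snippet of mine, not the article's benchmark): JS numbers are IEEE-754 doubles, so a running product saturates to Infinity instead of wrapping like C's signed integers:

```javascript
// Doubles top out near 1.8e308; multiplying past that yields Infinity,
// and Infinity then absorbs every further operation.
let myNum = 1;
for (let i = 0; i < 4; i++) {
  myNum *= 1e100; // after 4 rounds: 1e400, far past the double range
}
console.log(myNum); // Infinity
```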
Edit: looking at the rest of the code, I've also benchmarked the loop vs functional approach (Node 16, same device).
// Generate numbers
let numbers = [];
for (let i = -10000000; i < 10000000; i++) numbers.push(i);

let lowest = numbers.filter(x => x > 0).sort((a, b) => a - b)[0];
console.log(lowest);
^ this runs in 1.007s
let numbers = [];
for (let i = -10000000; i < 10000000; i++) numbers.push(i);

let lowest = Infinity;
for (const i of numbers) {
  if (i > 0 && i < lowest) {
    lowest = i;
  }
}
console.log(lowest);
^ this runs in about 0.629s
I wouldn't call a near 40% speed difference "only 2ns". The functional approach is very comfortable to program in, but it comes at a real cost and should definitely be avoided in complex algorithms.
Java handles streams very poorly. However, LINQ is quite fast:
var numbers = new long[20000000];
for (var i = -10000000L; i < 10000000L; i++) numbers[i + 10000000L] = i;
Console.WriteLine("{0}",
    numbers.Where(x => x > 0).OrderBy(i => i).First()
);
^ this runs in 0.346 seconds.
That doesn't make it quite optimal, though:
var numbers = new long[20000000];
for (var i = -10000000L; i < 10000000L; i++) numbers[i + 10000000L] = i;
long lowest = 9999999999L;
foreach (var i in numbers) {
    if (i > 0 && i < lowest)
        lowest = i;
}
Console.WriteLine("{0}", lowest);
^ This runs in 0.151s
These examples aren't very conclusive either. Rust's loop version runs in 0.100s and a shitty iterator version that collects the entire iterator and sorts it runs in 0.130s. Again, the real fight here seems to be between C# and something closer to the metal.
That's the most ridiculous benchmark I've ever seen. Real world workloads are absolutely nothing like the example in this blog post.
Also, why ignore memory usage and JS VM startup time? Why ignore the massive dependency tree that comes with JS frameworks? There are multiple factors to consider when assessing language performance. JS is far less efficient than other languages... but that's honestly fine. There are plenty of use cases where your application doesn't have to be incredibly performant.
There are many things to consider beyond raw JS speed: bloated frameworks, dependency hell, startup times... A single benchmark may not show much difference, but when you factor in everything else there is a noticeable gap between a JavaScript CLI app and a Go/C one. I can visibly see the difference even for CLI tools, and my process manager clearly shows that any desktop GUI written in JavaScript is far less performant than most native counterparts.
Stop writing C code like this author does! C is a language for experts, and there are lots of things wrong here; especially in benchmark code, where you do want the compiler to optimize.
> int main()
The easiest way to find someone who's inexperienced in C is to find someone who declares a function that takes no arguments with an empty pair of parentheses. In C but not C++ you need to write "void" inside the parentheses.
> myNum *= i;
Signed integer overflow is undefined behavior. You are lucky if the compiler didn't just replace the whole thing with __builtin_unreachable().
Next, the function never uses the computed variable, so a compiler can just optimize the whole thing out.
The point I'm trying to make is that C is a difficult language to write; especially so when you want to benchmark.
> The easiest way to find someone who's inexperienced in C is to find someone who declares a function that takes no arguments with an empty pair of parentheses. In C but not C++ you need to write "void" inside the parentheses.
C11§6.7.6.3p14:
> An identifier list declares only the identifiers of the parameters of the function. An empty list in a function declarator that is part of a definition of that function specifies that the function has no parameters. The empty list in a function declarator that is not part of a definition of that function specifies that no information about the number or types of the parameters is supplied.
That declarator is part of a definition. Even if it were not, the elision of a function's formal parameter list does not matter very much if, like 'main', that function is usually not explicitly called.
The compiler used, Clang 13.0.0, does indeed optimize out everything except for the "return 0;" at the end unless you compile with -O0 to disable optimization. I'm not sure whether the author benchmarked with -O0 or benchmarked a no-op, but either one is probably a mistake.
This article is really a reiteration of the https://wiki.c2.com/?SufficientlySmartCompiler argument. I would have stuck with a far less appealing, but closer-to-reality, thesis: "We've been lied to: JavaScript is pretty fast, but..."
JavaScript is probably like 10x or 100x slower than C or similar languages when you write something bigger and any of the following happens:
- the ratio of code size to run time is too big. Then the jit can’t keep up.
- you use a lot of value types. Them be structs in C, no need for allocation. In JS them be objects. The JS VM will try to escape analyze them, and it will succeed some of the time, but fail enough of the time to cause massive punishment.
- you churn objects while having a large base heap size and the generational hypothesis holds only a bit. Then you’ll wait for the GC a lot.
- you have a class hierarchy with many descendants and you often access properties or call methods on the base type. Then vtables or whatever work great but JS inline caches blow up.
- probably lots of other conditions.
Write enough code and at least one of these will hold and you’ll be slow in JS, fast in C.
(Source: I work on JSC and I implemented a lot of its optimizations. Benchmarks like the ones in this post are the sort of thing my JITs eat for breakfast. It’s cool to see people throwing me softballs but I like to be honest about what the technology I work on is capable of.)