.NET Core 3.0 has come with a huge surprise for me: Tiered Compilation (it has existed since 2.1, but is now enabled by default). Years ago I wrote about how odd it seemed to me that .NET continued to use a single-pass JIT compiler, contrary to Java HotSpot and all modern JavaScript engines, which initially either interpret a method or JIT compile it with a fast, lower-quality compiler and then, if considered appropriate, hot-swap it with an optimised JIT-compiled version. Now things have changed.
Now .NET Core comes with a fast, lower-quality JIT compiler (tier-0) and a slower, higher-quality JIT compiler (tier-1). At runtime, methods are initially compiled to native code with tier-0, and if they are executed more than 30 times they get compiled again (by tier-1) and hot-swapped. This article provides some useful diagrams. Long story short, calls to compiled methods now go through an additional level of indirection: a "count and jump to the initial, non-optimised native code" is done until the counter threshold is reached, and then the "count and jump" is replaced by a "jump to the new optimised native code". Several versions of the compiled native code are stored and could get hot-swapped again over time, but I have not found information about when this would happen.
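Just to make the "count and jump" idea concrete, here is a toy sketch in Python (nothing to do with the actual runtime internals): calls go through a stub that counts invocations, and once the threshold is crossed the entry point is rebound to the "optimised" version. The threshold of 30 matches the one mentioned above; the function names and bodies are made-up placeholders.

```python
TIER1_THRESHOLD = 30  # call count after which the runtime re-jits (per the article)

def make_tiered(tier0, tier1, threshold=TIER1_THRESHOLD):
    """Wrap a method so calls count invocations and hot-swap to tier1."""
    state = {"calls": 0, "entry": None}

    def counting_stub(*args):
        state["calls"] += 1
        if state["calls"] >= threshold:
            # Hot-swap: future calls jump straight to the tier-1 code,
            # with no more counting overhead.
            state["entry"] = tier1
        return tier0(*args)

    state["entry"] = counting_stub

    def entry(*args):
        # The extra level of indirection every call goes through.
        return state["entry"](*args)

    entry.state = state
    return entry

# Placeholder "native code" versions of the same method.
def add_tier0(a, b):
    return a + b  # quickly-jitted, unoptimised code

def add_tier1(a, b):
    return a + b  # re-jitted, optimised code

add = make_tiered(add_tier0, add_tier1)
for _ in range(31):
    add(1, 2)
assert add.state["entry"] is add_tier1  # swapped after 30 calls
```

Of course the real runtime does this with native code stubs and backpatched jumps, not a dictionary lookup, but the control flow is the same.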
Reading some other articles [1], [2] you'll find that .NET has had two JIT compilers of different quality since the beginning. A faster one (equivalent to the current tier-0) was used for debug builds, and an optimising one (equivalent to tier-1) for release builds. The thing is that you were stuck with one version or the other, and no hot-swapping existed. There's also the question of why not use an interpreter rather than a non-optimising JIT (and then the optimising JIT for frequently invoked methods), as done by Java HotSpot or some JavaScript engines. The main answer is that right now .NET does not have a good-enough interpreter (but it seems Mono has introduced a high-quality interpreter... so who knows if this could change in the future).
Finally, we have to take into account that .NET Core also provides Ready To Run (R2R) images. This means that the compiler will compile your C# code (I guess this also exists for VB.NET and F#) to native code rather than CIL bytecode. This way, when the application runs, the tier-0 compiler is not used for that code, so start-up times are even faster, because we already have the non-optimised native code; but the counting, and recompiling with the tier-1 JIT compiler when appropriate, still applies, so we can still end up with highly optimised code.
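For reference, producing an R2R image is a publish-time option. A minimal sketch (the configuration and runtime identifier here are just example values; R2R publishing requires specifying a runtime ID because the output is platform-specific native code):

```shell
# PublishReadyToRun is the MSBuild switch that ships with .NET Core 3.0;
# it can also be set as a property in the .csproj instead of on the command line.
dotnet publish -c Release -r win-x64 -p:PublishReadyToRun=true
```

The resulting image still embeds the CIL alongside the precompiled native code, which is what allows the tier-1 JIT to re-optimise hot methods later on.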