LLVM is the code generation backend used by several languages, such as Rust, and by one of the major compilers for C and C++ (Clang). Code generated by these compilers is considered "fast/performant" largely thanks to LLVM.
The problem with LLVM has always been that it takes a long time to produce code. The post in the link promises a new backend that produces a slower artifact, but does so 10-20x quicker. This is great for debug builds.
This doesn’t mean the compilation as a whole gets quicker. There are 3 steps in compilation:
- Front end: transforms source code into an LLVM intermediate representation (IR)
- Backend: this is where LLVM comes in. It accepts LLVM IR and transforms it into machine code
- Linking: a separate program links the artifacts produced by LLVM.
How long does each step take? Really depends on the program we’re trying to compile. This blog post contains timings for one example program (https://blog.rust-lang.org/2023/11/09/parallel-rustc/) to give you an idea. It also depends on whether LLVM is asked to produce a debug build (not performant, but quicker to produce) or a release build (fully optimised, takes longer).
The 10-20x improvement described here doesn’t work yet for clang or rustc, and when it does it will only speed up the backend portion. Nevertheless, this is still an incredible win for compile times because the other two steps can be optimised independently. Great work by everyone involved.
In terms of runtime performance, the TPDE-generated code is comparable with and sometimes a bit faster than LLVM -O0.
I agree that front-ends are a big performance problem and both rustc and Clang (especially in C++ mode) are quite slow. For Clang with LLVM -O0, 50-80% is front-end time, with TPDE it's >98%. More work on front-end performance is definitely needed; maybe some things can be learned from Carbon. With mold or lld, I don't think linking is that much of a problem.
We now support most LLVM-IR constructs that are frequently generated by rustc (most notably, vectors). I just didn't get around to actually integrate it into rustc and get performance data.
> The 10-20x improvement described here doesn’t work yet for clang
Not sure what you mean here, TPDE can compile C/C++ programs with Clang-generated LLVM-IR (95% of llvm-test-suite SingleSource/MultiSource, large parts of the LLVM monorepo).
IMO the worst problem with LLVM isn't that it's slow; the worst problem is that its IR has poorly defined semantics, or its team doesn't actually deliver those semantics and a bug ticket asking "Hey, what gives?" goes in the pile of never-never tickets. That makes it less useful as a compiler backend even if it were instant.
This is the old "correctness versus performance" problem, and we already know that "faster but wrong" isn't meaningfully faster; it's just wrong. Anybody can give a wrong answer immediately, so that's not useful at all.
The really difficult thing would be to write a new compiler backend with a coherent IR that everybody understands and that you'll stick to. Unfortunately, you can be quite certain that after you've done the incredibly hard work to build such a thing, a lot of people's assessment of your backend will be:
1. The code produced was 10% slower than LLVM, never use this, speed is all that matters anyway and correctness is irrelevant.
2. This doesn't support the Fongulab Splox ZV406 processor made for six years in the 1980s, whereas LLVM does, therefore this is a waste of time.
> The really difficult thing would be to write a new compiler backend with a coherent IR that everybody understands and you'll stick to.
But why would you bother, when with those same skills and a lot less time, you could fork LLVM, correct its IR semantics yourself (unilaterally), and then push people to use your fork?
(I.e. the EGCS approach to forcing the upstream to fix their shit.)
> This doesn't support the Fongulab Splox ZV406 processor made for six years in the 1980s, whereas LLVM does, therefore this is a waste of time.
AFAIK, the various Fongulab Sploxes that LLVM has targets for are mostly there to act as forcing functions: they keep around features that no public backend would otherwise rely on, because proprietary, downstream backends rely on those features. (See e.g. https://q3k.org/lanai.html, where the downstream ISA of interest is indeed proprietary, but used to be public before an acquisition; so the contributor [Google] upstreamed an implementation of the old public ISA target.)
Thanks for the link about Lanai, although that site's cert has expired (very recently, too), so it's slightly annoying (or, of course, bad guys are attacking me).
As to the first point, I suspect this is a foundational problem. Like, suppose you realise the concrete used to make a new skyscraper was the wrong mixture. In a sense this is a small change, there's nothing wrong with the elevators, the windows, cabling, furnishing, air conditioning, and so on. But, to "fix" this problem you need to tear down the skyscraper and replace it. Ouch.
I may be wrong; I have never tried to solve this problem. But I fear...
Or native code generation. Depends on what your performance goals are. It would be cool if there were a standard IR that languages could target - something more suitable than C.
Produce dumb machine code, just good enough to bootstrap it, and go from there.
Move away from classical UNIX compiler pipelines.
However, in current times I would rather invest in improving LLMs at generating executables directly. The time to mix AI into compiler development has come, and classical programming languages are just like doing yet another UNIX clone, in terms of value.
C and C++ compilers are deterministic and have guarantees of correctness similar to those of other languages (esp. ones that share the same LLVM backend).
C++ compilers are not required to be deterministic and in practice are not, at least as far as "same source code produces same observable behavior". Things that can introduce non-determinism include the order in which symbols are linked, static variable initialization, floating point operations (unless you use strict mode, which is not mandated by the standard), and this is ignoring the obvious stuff like unspecified behavior which is specifically defined as behavior which can differ between different runs on the same system.
Also correctness guarantees? Hahaha... I'll pretend you didn't just claim C++ has correctness guarantees on par with other languages, LLVM or otherwise. C++ gives you next to nothing with respect to correctness guarantees.
Maybe HN should add "Don't accuse comments of being LLM generated" to the guidelines, because this sure seems like it'll be in the same category as people moaning that they were downvoted, or, more closely, people saying "Have you read the link?"
We've talked about this but we're not adding it to the guidelines. It's already covered indirectly by the established guidelines, and "case law" (in the form of moderator replies) makes it explicit.