From bcd1c01fe0e2ec52899c28da97fc37d0186ba8d9 Mon Sep 17 00:00:00 2001 From: Ralf Jung Date: Mon, 14 Dec 2020 17:32:34 +0100 Subject: [PATCH 1/1] final tweaks and publish Pointers are Complicated II --- .../2020-12-14-provenance.md} | 39 ++++++++++++------- 1 file changed, 25 insertions(+), 14 deletions(-) rename personal/{_drafts/provenance-matters.md => _posts/2020-12-14-provenance.md} (85%) diff --git a/personal/_drafts/provenance-matters.md b/personal/_posts/2020-12-14-provenance.md similarity index 85% rename from personal/_drafts/provenance-matters.md rename to personal/_posts/2020-12-14-provenance.md index 7ad9f1a..48fc93c 100644 --- a/personal/_drafts/provenance-matters.md +++ b/personal/_posts/2020-12-14-provenance.md @@ -6,19 +6,20 @@ categories: rust Some time ago, I wrote a blog post about how [there's more to a pointer than meets the eye]({% post_url 2018-07-24-pointers-and-bytes %}). One key point I was trying to make is that -> *just because two pointers point to the same address, does not mean they are equal and can be used interchangeably.* +> *just because two pointers point to the same address, does not mean they are equal in the sense that they can be used interchangeably.* This "extra information" that distinguishes different pointers to the same address is typically called [*provenance*](https://rust-lang.github.io/unsafe-code-guidelines/glossary.html#pointer-provenance). -This post is a cautionary tale of what can go wrong when provenance is not considered sufficiently carefully in an optimizing compiler. +This post is another attempt to convince you that provenance is "real", by telling a cautionary tale of what can go wrong when provenance is not considered sufficiently carefully in an optimizing compiler. +The post is self-contained; I am not assuming that you have read the first one. There is also a larger message here about how we could prevent such issues from coming up in the future by spending more effort on the specification of compiler IRs. -I will show a series of three compiler transformations that each seem "intuitively justified", but when taken together they lead to a clearly incorrect result. +Below, I will show a series of three compiler transformations that each seem "intuitively justified", but when taken together they lead to a clearly incorrect result. I will use LLVM for these examples, but the goal is not to pick on LLVM---other compilers suffer from similar issues. The goal is to convince you that to build a correct compiler for languages permitting unsafe pointer manipulation such as C, C++, or Rust, we need to take IR semantics (and specifically provenance) more seriously. -I use LLVM for the examples because it is particularly easy to study with its single, (comparatively) well-documented IR that a lot of infrastructure evolved around. +I use LLVM for the examples because it is particularly easy to study with its single, extensively-documented IR that a lot of infrastructure evolved around. Let's get started! ## Warm-up: Why IRs need a precise semantics @@ -69,7 +70,7 @@ It may seem now that under this premise, it is impossible to perform the loop-in But that is not the case! So far, what we have seen is that the optimization is not *correct* when being performed on a C program. But when LLVM performs these optimizations, it does not consider the program to be written in C---it considers the program to be written in LLVM IR, which has a different semantics than C. -Specifically, signed integer overflow in LLVM IR yields a `poison` result. 
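+As a reminder, the transformation in question looks roughly like this (a condensed sketch of the warm-up example; the exact code is shown earlier in the post, and `sum_up_optimized` is just my label here for the transformed version):
+
+```c
+// Before: i + j is only evaluated when the loop body actually runs,
+// so sum_up(INT_MAX, 1, 0) never performs the addition and returns 0.
+int sum_up(int i, int j, int n) {
+    int result = 0;
+    while (n > 0) {
+        result += i + j;
+        n -= 1;
+    }
+    return result;
+}
+
+// After loop-invariant code motion: s = i + j is evaluated unconditionally.
+// As C code, sum_up_optimized(INT_MAX, 1, 0) now has signed overflow (UB),
+// even though the original program never entered the loop.
+int sum_up_optimized(int i, int j, int n) {
+    int result = 0;
+    int s = i + j;
+    while (n > 0) {
+        result += s;
+        n -= 1;
+    }
+    return result;
+}
+```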
+Specifically, the [LLVM LangRef](https://llvm.org/docs/LangRef.html) says that signed integer overflow in LLVM IR yields a `poison` value.
It is not UB to produce `poison`; it is just UB to use `poison` in certain ways (the details of this do not matter here).
In a call to the optimized `sum_up(INT_MAX, 1, 0)`, the `s` variable introduced by loop-invariant code motion is unused, so the fact that its value is `poison` does not matter!

@@ -82,7 +83,7 @@ The great thing about *correct* optimizations is that we can combine any number
However, to make the argument that an optimization is *correct*, the exact semantics of LLVM IR (what the behavior of all possible programs is and when they have UB) needs to be documented.
All involved optimizations need to exactly agree on what is and is not UB, to ensure that whatever code they produce will not be considered UB by a later optimization.
This is exactly what we also expect from the specification of a programming language such as C, which is why I think we should consider compiler IRs as proper programming languages in their own right, and specify them with the same diligence as we would specify "normal" languages.[^ub-difference]
-Sure, no human is going to write many programs in LLVM IR, but clang and rustc produce LLVM IR programs all the time, and as we have seen understanding the exact rules governing these programs is crucial to ensuring that the optimizations LLVM performs do not change program behavior.
+Sure, no human is going to write many programs in LLVM IR, so their syntax barely matters, but clang and rustc produce LLVM IR programs all the time, and as we have seen, understanding the exact rules governing the behavior of these programs is crucial to ensuring that the optimizations LLVM performs do not change program behavior.

[^cheat]: If now you feel like we somehow cheated, since we can always translate the program from C to LLVM IR, optimize there, and translate back, consider this: translating from LLVM IR to C is really hard! In particular, signed integer addition in LLVM IR can *not* be translated into signed integer addition in C, since the former is well-defined with `poison` result in case of overflow, but the latter says overflow is UB. C has strictly more UB than LLVM IR (for integer arithmetic), which makes translation in one direction easy, while the other direction is hard.

@@ -98,7 +99,6 @@ With the warm-up done, we are now ready to consider some more tricky optimizatio
We will look at three different optimizations LLVM can perform, and I will show that they *cannot all be correct* since the first and last program we are considering actually have *different behavior*.
(More precisely: the last program has a possible behavior that was not possible for the first program.)
This is only possible if at least one optimization changed program behavior in an incorrect way, but it is actually not entirely clear which optimization is the culprit.
-The reasons this happens, I claim, is that the LLVM IR specification is not precise enough to truly evaluate whether the involved optimizations are *correct*.

The sequence of examples is taken from [this talk](https://sf.snu.ac.kr/llvmtwin/files/presentation.pdf#page=32) by Chung-Kil Hur; it was discovered while working on a mathematically rigorous specification of LLVM.

@@ -163,7 +163,7 @@ But which one is it?
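+As a reminder, the example under discussion has roughly this shape (a sketch paraphrasing the code from the talk linked above; the exact program is given earlier in the post, and I am using `uintptr_t` and `printf` here just to keep the sketch self-contained):
+
+```c
+#include <stdio.h>
+#include <stdint.h>
+
+int main(void) {
+    char p[1], q[1] = {0};
+    // If the compiler happens to place q directly behind p, then the
+    // one-past-the-end address p+1 is equal to the address of q.
+    uintptr_t ip = (uintptr_t)(p + 1);
+    uintptr_t iq = (uintptr_t)q;
+    if (ip == iq) {
+        // Roughly, the three optimizations above do the following:
+        // 1. replace iq by ip in the store below (justified by ip == iq),
+        // 2. remove the resulting pointer-integer-pointer round-trip,
+        //    turning the store into *(p+1) = 10,
+        // 3. conclude that q is never written to, and fold q[0] to 0.
+        // The original program can print 10; the final program prints 0.
+        *(char *)iq = 10;
+        printf("%d\n", q[0]);
+    }
+    return 0;
+}
+```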
In an ideal world, we would have a sufficiently precise semantics for LLVM IR that we would just have to read the docs (or, even better, run some Miri-like tool) to figure out the answer.
However, describing language semantics at this level of precision is *hard*, and full of trade-offs.
-That's why the LLVM LangRef will not give us a clear answer here, and indeed obtaining a clear answer requires some decisions that have not been explicitly made yet.
+The LLVM LangRef will not give us a clear answer here, and indeed obtaining a clear answer requires some decisions that have not been explicitly made yet.
To proceed, we will use the three optimizations that we considered above as cues: assuming that the optimization is correct for LLVM IR, what does that tell us about the semantics?

@@ -235,11 +235,11 @@ But in a language such as Rust, C, or C++ that supports pointer-integer casts, t
We also learned that LLVM has a bug, but that was *not* the point of this blog post.
The GCC developers [made exactly the same mistake](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82282), and I got word that MSVC and ICC have the same issue (though I do not know how to verify this).
-And I cannot blame them; the way compiler development typically works, I think bugs like this are inevitable: when exactly UB arises in an IR is often only loosely specified, so evaluating whether some optimization is *correct* in the sense defined above is basically impossible.
+And I cannot blame them; the way compiler development typically works, I think bugs like this are inevitable: when exactly UB arises in an IR is often only loosely specified, in many cases "by omission" (where cases not covered in the spec are implicitly UB), so evaluating whether some optimization is *correct* in the sense defined above can be very tricky or even impossible.
Pointer provenance is just a particularly good (and subtle) example.
-The warm-up above is another trivial case of this (albeit one where the existing specification is sufficient): loop-invariant code motion of arithmetic operations and UB on arithmetic overflow can both be *correct*, but not in the same IR.

For another example, see [§2.3 of this paper](https://plv.mpi-sws.org/validc/paper.pdf) (Figure 3 contains the code), which shows how a sequence of two optimizations can lead to a miscompilation, where the first optimization is *correct* under the LLVM concurrency model, and the second optimization is *correct* under the C++11 concurrency model---but there is no concurrency model under which *both* optimizations are correct, so each compiler (or rather, each compiler IR) needs to pick one or the other.
Finally, this [paper on `undef` and `poison`](https://www.cs.utah.edu/~regehr/papers/undef-pldi17.pdf) gives examples of optimizations that are broken by the presence of `undef` in LLVM, and describes some of the trade-offs that arise when defining the semantics of `poison`.
+Again, miscompilations arise because the consequences of a statement in one place of the specification (`undef` picks a fresh value at each use) are not considered elsewhere (testing an integer for equality with zero does not imply it is zero; it could also be `undef`); the sketch below illustrates this.

Which brings me to my main conclusion for this post: to avoid the problem of incompatible optimizations, I think we need to take compiler IRs more seriously as programming languages in their own right, and give them a precise specification---including all the UB.
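+To make the `undef` problem from a moment ago concrete, here is a minimal sketch (my own illustration, not code from the paper; `undef` only exists at the LLVM IR level, so the comments describe IR-level reasoning in C syntax):
+
+```c
+// A single use of x: even if x is undef in LLVM IR, x * 2 is always
+// even, because the one use of x picks one value.
+int twice(int x) { return x * 2; }
+
+// The "obvious" rewrite x * 2 => x + x duplicates the use of x.
+// Each use of undef may pick a different value, so x + x can be odd
+// when x is undef: the rewrite is not correct under this semantics.
+int twice_rewritten(int x) { return x + x; }
+```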
Now, you may object by saying that LLVM has an [extensive LangRef](https://llvm.org/docs/LangRef.html), and still, by reading the LLVM specification, one could convince oneself that each of the three optimizations above is correct, which, as we have seen, is contradictory.

@@ -254,11 +254,22 @@ Doing so makes it obvious that a pointer has provenance, since otherwise it is i
This is my bar for what I consider a sufficiently precise language specification: it needs to contain all the information needed such that writing a UB-checking interpreter is just a matter of "putting in the work", but does not require introducing anything new that is not described in the specification.

-Ideally, the interpreter is not just hypothetical.
-This is really my main motivation for working on the [Miri interpreter](https://github.com/rust-lang/miri/).
-Of course, practically speaking, its main purpose is to help unsafe code authors avoid UB, but for me personally, I find it equally important that it helps us think about the semantics of Rust and MIR in a different way.
+The good news is that the process of developing these more precise specifications is already underway!

-I hope this was educational, and thanks for reading. :)
+On the Rust side, this is mainly apparent in the [Miri interpreter](https://github.com/rust-lang/miri/), which is a concrete realization of the "hypothetical" interpreter that I mentioned above.
+In fact, this is why I originally started working on Miri.
+Nowadays, its main purpose is to help unsafe code authors avoid UB, but for me personally, I find it equally important that it helps us think about the semantics of Rust and MIR in a different way.
+It also feeds back into the design of the UB rules by discovering patterns that people want or need to use but that are not currently accepted by Miri.
+
+On the LLVM side, the main development in this area is [Alive](https://blog.regehr.org/archives/1722), a tool that can automatically validate[^validate] optimizations performed by LLVM.
+Alive has found [many bugs in LLVM optimizations](https://github.com/AliveToolkit/alive2/blob/master/BugList.md), and indeed much of the recent dialog with the LLVM community aimed at a more precise IR semantics is driven by the people building Alive, led by Nuno P. Lopes and John Regehr.
+
+[^validate]: A note on terminology: "validating" an optimization means that given the program text before and after the optimization, the tool will try to prove that this particular transformation is correct. This is in contrast to "verifying" an optimization, where a once-and-for-all proof is carried out showing that the optimization will always perform a correct transformation. Verification gives a much stronger result, but is also extremely difficult to carry out, so validation is a great middle ground that is still able to find plenty of bugs.
+
+Progress on these specification efforts is slow, though, in particular when it turns out that [LLVM IR semantics should be changed](https://www.cs.utah.edu/~regehr/papers/undef-pldi17.pdf).
+I hope this post can raise awareness of the subtle problems optimizing compilers are facing, and convince some people that figuring out the specification of compiler IRs is an important and interesting problem to work on. :)
+
+That's all I have for today, thanks for sticking with me! As usual, this post can be discussed in the Rust forums. I am curious what your thoughts are on how we can build compilers that do not suffer from the issues I have discussed here.

-- 2.30.2