From: Ralf Jung
Date: Wed, 2 Dec 2020 17:10:30 +0000 (+0100)
Subject: start writing provenance-optimization blog post
X-Git-Url: https://git.ralfj.de/web.git/commitdiff_plain/2d883e4ef15ac533c8c3c2378cfb132d210467d2?ds=sidebyside

start writing provenance-optimization blog post
---

diff --git a/personal/_drafts/provenance-matters.md b/personal/_drafts/provenance-matters.md
new file mode 100644
index 0000000..946044b
--- /dev/null
+++ b/personal/_drafts/provenance-matters.md
@@ -0,0 +1,144 @@
---
title: "Pointers Are Complicated II, or: Why Provenance Matters"
categories: rust
---

Some time ago, I wrote a blog post about how [there's more to a pointer than meets the eye]({% post_url 2018-07-24-pointers-and-bytes %}).
One key point I was trying to make is that

> *just because two pointers point to the same address, does not mean they are equal and can be used interchangeably.*

This "extra information" that comes with a pointer is typically called [*provenance*](https://rust-lang.github.io/unsafe-code-guidelines/glossary.html#pointer-provenance).
This post is a cautionary tale of what can happen when provenance is not considered sufficiently carefully in an optimizing compiler.

As an example, I will show a series of three compiler transformations that each seem "intuitively justified" based on wording in the C standard and some common understanding of how pointers work.
We will use LLVM for these examples, but the goal is not to pick on LLVM -- other compilers suffer from similar issues.
The goal is to convince you that we need to take provenance seriously if we want to build a correct compiler for languages permitting unsafe pointer manipulation such as C, C++, or Rust.

## Warm-up: Why IRs need a precise semantics

As a warm-up, let me try to convince you that compiler IRs such as LLVM IR need a precise (and precisely documented) semantics.
If you are already familiar with the idea of treating compiler IRs as proper programming languages in their own right, you can skip to the next section.

Consider the following simple (and contrived, for the sake of this example) piece of C code:
{% highlight c %}
int sum_up(int i, int j, int n) {
    int result = 0;
    while (n > 0) {
        result += i + j;
        n -= 1;
    }
    return result;
}
{% endhighlight %}
One transformation the compiler might want to do is to move the addition `i+j` out of the loop, to avoid computing the sum each time around the loop:
{% highlight c %}
int sum_up(int i, int j, int n) { // optimized version
    int result = 0;
    int s = i + j;
    while (n > 0) {
        result += s;
        n -= 1;
    }
    return result;
}
{% endhighlight %}
However, that transformation is actually incorrect.
If we imagine a caller using this function as `sum_up(INT_MAX, 1, 0)`, then this is a perfectly correct way to call `sum_up`: the loop is never entered, so the overflowing addition `INT_MAX+1` is never performed.
However, after the desired optimization, the program now causes a signed integer overflow, which is UB (Undefined Behavior) and thus May Never Happen!

One might be tempted to ignore this problem because UB on integer overflow is a compiler-only concept; every target supported by the compiler will do the obvious thing and just produce an overflowing result.
However, there might be other compiler passes running after the optimization we are considering.
One such pass might inline `sum_up`, another might notice the `INT_MAX+1` and replace it by `unreachable`, and a third might then just remove all our code since it is unreachable.
Each of these passes has a good reason to exist (it can help real code become a lot faster or help prune dead code), but if we combine them all with our loop hoisting optimization, the result is a disaster.

I am convinced that the only way to avoid such problems is to find a way to justify the correctness of each optimization *in isolation*.
Each optimization must be *correct* for any possible program, where *correct* means that the optimized program must only "do things" that the original program could have done as well.
(This is basically the "as-if" rule in the C standard, and is typically called "refinement" in the academic literature.)
In particular, no optimization must ever introduce UB into a UB-free program.

It may seem now that under this premise, it is impossible to perform the loop hoisting optimization we are considering.
But that is not the case!
We can perform the optimization, *we just cannot perform it in C*.
Instead, we have to perform it in LLVM IR (or any other IR with a suitable semantics).
Specifically, signed integer overflow in LLVM yields a `poison` result.
It is not UB to produce `poison`, it is just UB to use `poison` in certain ways (the details do not matter here).
In a call to `sum_up(INT_MAX, 1, 0)`, the `s` variable introduced by loop hoisting is unused, so the fact that its value is `poison` does not matter!

Due to this behavior of signed integer overflow, the loop hoisting optimization is *correct* if we consider it as an optimization on programs that are written in LLVM IR.[^cheat]
In particular, this means we can freely combine this optimization with any other optimization that is *correct* in LLVM IR (such as inlining, replacing definite UB by `unreachable`, and removing unreachable code), and we can be sure that the result obtained after all these optimizations is a correct compilation of our original program.

However, to make the argument that an optimization is *correct*, the exact semantics of LLVM IR (what the behavior of all possible programs is and when they have UB) needs to be precisely and unambiguously documented.
All involved optimizations need to exactly agree on what is and is not UB, to ensure that whatever code they produce will not be considered UB by a later optimization.[^ub-difference]
This is exactly what we also expect from the specification of a programming language such as C, which is why I think we should consider compiler IRs as proper programming languages in their own right, and specify them with the same diligence as we would specify "normal" languages.
Sure, no human is going to write many programs in LLVM IR, but clang and rustc produce LLVM IR programs all the time, and as we have seen, understanding the exact rules governing these programs is crucial to ensuring that the optimizations LLVM performs do not change program behavior.

[^cheat]: If you now feel like we somehow cheated, since we can always translate the program from C to LLVM IR, optimize there, and translate back, consider this: translating from LLVM IR to C is really hard! In particular, signed integer addition in LLVM IR can *not* be translated into signed integer addition in C, since the former is well-defined with a `poison` result in case of overflow, but the latter says overflow is UB. C has, in a sense, strictly more UB than LLVM IR, which makes translation in one direction easy, while the other direction is hard.

[^ub-difference]: In fact, I would say that two different variants of the IR with different rules for UB are really *two different programming languages*. A program that is well-defined in one language may have UB in another, so great care needs to be taken when the program is moved from being governed by one set of rules to another.

*Take-away:* If we want to be able to justify the correctness of a compiler in a modular way, considering only one optimization at a time, we need to perform these optimizations in an IR that has a precise specification of all aspects of program behavior, including UB.
Then we can, for each optimization separately, consider the question: does the optimization ever change program behavior, and does it ever introduce UB into UB-free programs?

## How to break this code in 3 easy steps

With the warm-up done, we are now ready to consider some more tricky optimizations.
We will look at three different optimizations LLVM can perform, and I will show that they *cannot all be correct* since the first and last program we are considering actually have *different behavior*.
(More precisely: the last program has a possible behavior that was not possible for the first program.)
This is only possible if at least one optimization changed program behavior in an incorrect way.
To determine which optimization is at fault, we will need a more precise specification of LLVM IR than what currently exists.

Here is the source program:
{% highlight c %}
char p[1], q[1] = {0};
int ip = (int)(p+1);
int iq = (int)q;
if (iq == ip) {
    *(char*)iq = 10;
    print(q[0]);
}
{% endhighlight %}
We are using C syntax here, but remember that we just use C syntax as a convenient way to write programs in LLVM IR.
This program has two possible behaviors: either `ip` (the address one-past-the-end of `p`) and `iq` (the address of `q`) are different, and nothing is printed.
Or the two are equal, in which case the program will print "10" (`iq` is the result of casting `q` to an integer, so casting it back will yield the original pointer, or at least a pointer pointing to the same object / location in memory).

The first "optimization" we will perform is to exploit that `iq == ip`, so we can replace all `iq` by `ip`, and subsequently inline the definition of `ip`:
{% highlight c %}
char p[1], q[1] = {0};
int ip = (int)(p+1);
int iq = (int)q;
if (iq == ip) {
    *(char*)(int)(p+1) = 10;
    print(q[0]);
}
{% endhighlight %}

The second optimization notices that we are taking a pointer `p+1`, casting it to an integer, and casting it back, so we can remove the cast roundtrip:
{% highlight c %}
char p[1], q[1] = {0};
int ip = (int)(p+1);
int iq = (int)q;
if (iq == ip) {
    *(p+1) = 10;
    print(q[0]);
}
{% endhighlight %}

The final optimization notices that `q` is never written to, so we can replace `q[0]` by its initial value `0`:
{% highlight c %}
char p[1], q[1] = {0};
int ip = (int)(p+1);
int iq = (int)q;
if (iq == ip) {
    *(p+1) = 10;
    print(0);
}
{% endhighlight %}

However, this final program is different from the first one!
Specifically, the final program will either print nothing or print "0", while the original program *could never print "0"*.
This shows that the sequence of three optimizations we performed, as a whole, is *not correct*.

### Playing the blame game

#### Footnotes