Ruling out operations such as comparison on uninitialized data is useful, because it means the compiler does not have to "remember" which exact bit pattern an uninitialized variable has!
A well-behaved (UB-free) program cannot observe that bit pattern anyway.
So each time an uninitialized variable gets used, we can just use *any* machine register---and for different uses, those can be different registers!
So, one time we "look" at `x` it can be at least 150, and then when we look at it again it is at most 120, even though `x` did not change.
`x` was just uninitialized all the time.
That explains why our compiled example program behaves the way it does.
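A program along these lines can sketch the effect (the exact function body here is an assumption, not necessarily the original example; the key point is that the two uses of the uninitialized `x` may be compiled to two different registers):

```rust
// For any *initialized* byte, this must return true: every u8 is
// either less than 150 or greater than 120.
fn test(x: u8) -> bool {
    x < 150 || x > 120
}

fn main() {
    // Producing and using an uninitialized integer is UB, so the
    // compiler is free to pick a different register for each use of
    // `x` inside `test` -- and the function can then return `false`.
    // (Left commented out: actually executing UB proves nothing.)
    // let x: u8 = unsafe { std::mem::MaybeUninit::uninit().assume_init() };
    // println!("{}", test(x));

    // With initialized data, the function behaves as it must:
    assert!(test(0) && test(130) && test(255));
    println!("ok");
}
```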
The real, physical hardware that we end up running the compiled program on is a very efficient *but imprecise* implementation of this abstract machine, and all the rules that Rust has for undefined behavior work together to make sure that this imprecision is not visible for *well-behaved* (UB-free) programs.
But for programs that do have UB, this "illusion" breaks down, and [anything is possible](https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html).
*Only* UB-free programs can be made sense of by looking at their assembly, but *whether* a program has UB is impossible to tell on that level.
For that, you need to think in terms of the abstract machine.[^sanitizer]
[^sanitizer]: This does imply that tools like valgrind, which work on the final assembly, can never reliably detect *all* UB.