In this model, pointers are not integers, but they are at least simple.
However, this simple model starts to fall apart once you consider pointer-integer casts.
In miri, casting a pointer to an integer does not actually do anything: we now just have an integer variable (i.e., its *type* says it is an integer) whose *value* is a pointer (i.e., an allocation-offset pair).
However, multiplying that "integer" by 2 leads to an error, because it is entirely unclear what it means to multiply such an abstract pointer by 2.
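For example, in the model described so far, a safe program like the following cannot be fully executed, because the multiplication is applied to such an abstract pointer (a minimal sketch; the variable names are mine):
{% highlight rust %}
fn main() {
    let ptr = Box::into_raw(Box::new(0u8));
    // The cast only changes the type; the *value* of `addr` is still
    // an allocation-offset pair, not a genuine mathematical integer.
    let addr = ptr as usize;
    // It is unclear what doubling an abstract pointer should mean,
    // so the abstract machine gets stuck here.
    let _twice = addr * 2;
}
{% endhighlight %}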
This is the laziest thing to do, and we do it because it is not clear what else to do -- in our abstract machine, there is no single coherent "address space" that all allocations live in and that we could use to map every pointer to a distinct integer.
Every allocation is just identified by an (unobservable) ID.
We could now start to enrich this model with extra data like a base address for each allocation, and somehow use that when casting an integer back to a pointer... but that's where it gets really complicated, and anyway discussing such a model is not the point of this post.
The point is to discuss the *need* for such a model.
If you are interested, I suggest you read [this paper](http://www.cis.upenn.edu/%7Estevez/papers/KHM+15.pdf) that explores the above idea of adding a base address.
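To make the following discussion more concrete, here is a minimal sketch of what pointer values look like in the basic model described so far (the type and field names are mine, not miri's actual definitions):
{% highlight rust %}
/// A purely abstract allocation identifier; the program cannot observe it.
struct AllocId(u64);

/// A pointer in this model: an allocation-offset pair.
struct Pointer {
    alloc: AllocId,
    /// Offset in bytes from the start of the allocation.
    offset: u64,
}
{% endhighlight %}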
I hope I made a convincing argument that integers are not the only data one has to consider when formally specifying low-level languages such as C++ or (the unsafe parts of) Rust.
However, this means that a simple operation like loading a byte from memory cannot just return a `u8`.
Imagine we [implement `memcpy`](https://github.com/alexcrichton/rlibc/blob/defb486e765846417a8e73329e8c5196f1dca49a/src/lib.rs#L39) by loading (in turn) every byte of the source into some local variable `v`, and then storing it to the target.
What if that byte is part of a pointer? When a pointer is a pair of allocation and offset, what is its first byte?
We have to say what the value of `v` is, so we have to find some way to answer this question.
(And this is an entirely separate issue from the problem with multiplication that came up in the last section. We just assume some abstract type `Pointer`.)

We cannot represent a byte of a pointer as an element of `0..256`.
Instead, we will remember both the pointer, and which byte of the pointer we got.
So, a byte is now *either* an element of `0..256` ("raw bits"), *or* the n-th byte of some abstract pointer.
If we were to implement our memory model in Rust, this might look as follows:
{% highlight rust %}
enum ByteV1 {
    /// A "normal" byte: one of the 256 possible bit patterns.
    Bits(u8),
    /// The n-th byte (starting at 0) of the given abstract pointer.
    PtrFragment(Pointer, u8),
}
{% endhighlight %}
For example, a `PtrFragment(ptr, 0)` represents the first byte of `ptr`.
This way, `memcpy` can "take apart" a pointer into the individual bytes that represent this pointer in memory, and copy them separately.
On a 32bit architecture, the full value representing `ptr` consists of the following 4 bytes:
```
[PtrFragment(ptr, 0), PtrFragment(ptr, 1), PtrFragment(ptr, 2), PtrFragment(ptr, 3)]
```
Such a representation supports performing all byte-level "data moving" operations on pointers, which is sufficient for `memcpy`.
Arithmetic or bit-level operations are not fully supported; as already mentioned above, that requires a more sophisticated pointer representation.
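For illustration, a byte-wise `memcpy` in this model only ever moves such abstract bytes around without looking inside them (a sketch of mine that treats the two memory regions simply as slices of `ByteV1` and assumes the enum derives `Clone`):
{% highlight rust %}
fn memcpy(dest: &mut [ByteV1], src: &[ByteV1]) {
    for (d, s) in dest.iter_mut().zip(src.iter()) {
        // Copy the abstract byte as-is: raw bits and pointer fragments
        // are treated exactly the same, which is all `memcpy` needs.
        *d = s.clone();
    }
}
{% endhighlight %}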
## Uninitialized Memory
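To see why uninitialized memory needs special treatment, consider code of roughly the following shape (a hypothetical sketch of mine; `cond_a` and `cond_b` are placeholder conditions, and the caller is assumed to guarantee that `cond_b` implies `cond_a`):
{% highlight rust %}
use std::mem::MaybeUninit;

unsafe fn example(cond_a: bool, cond_b: bool) -> i32 {
    let mut x = MaybeUninit::<i32>::uninit();
    if cond_a {
        x.write(1);
    }
    // ... lots of hard-to-analyze code that does not touch `x` ...
    if cond_b {
        // The compiler would like to replace this read by the constant `1`,
        // since `1` is the only value that is ever written to `x`.
        unsafe { x.assume_init() }
    } else {
        0
    }
}
{% endhighlight %}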
To justify replacing the read by `1`, we need a way to talk about the value of memory that was never written. So we add a third possibility to our type of bytes:
{% highlight rust %}
enum ByteV2 {
    /// A "normal" byte: one of the 256 possible bit patterns.
    Bits(u8),
    /// The n-th byte (starting at 0) of the given abstract pointer.
    PtrFragment(Pointer, u8),
    /// A byte that has not been initialized.
    Uninit,
}
{% endhighlight %}
With `Uninit`, we can easily argue that `x` is either `Uninit` or `1`, and since replacing `Uninit` by `1` is okay, the optimization is easily justified.
Without `Uninit`, however, `x` is either "some arbitrary bit pattern" or `1`, and doing the same optimization becomes much harder to justify.[^3]

[^3]: We could argue that we can reorder when the non-deterministic choice is made, but then we have to prove that the hard-to-analyze code does not observe `x`. `Uninit` avoids that unnecessary extra proof burden.

Finally, `Uninit` is also a better choice for interpreters like miri.
Such interpreters have a hard time dealing with operations of the form "just choose any of these values" (i.e., non-deterministic operations), because fully exploring all possible program executions then means trying every possible value.