From: Ralf Jung
Date: Sun, 11 Oct 2015 21:15:16 +0000 (+0200)
Subject: some work on the Rust post
X-Git-Url: https://git.ralfj.de/web.git/commitdiff_plain/850c6e257a21a8b0c6609168c6d2928ec4c02398

some work on the Rust post
---

diff --git a/ralf/_drafts/formalizing-rust.md b/ralf/_drafts/formalizing-rust.md
index 7058dc1..06f3e76 100644
--- a/ralf/_drafts/formalizing-rust.md
+++ b/ralf/_drafts/formalizing-rust.md
@@ -9,55 +9,67 @@ Other have [said](https://www.youtube.com/watch?v=O5vzLKg7y-k) and [written](htt
 Let me just use this opportunity for a shameless plug: If you are curious and want to learn Rust, check out [Rust-101](https://www.ralfj.de/projects/rust-101/main.html), a hands-on Rust tutorial I wrote.
 I am going to assume some basic familiarity with Rust in the following.
 
-Why do we want to formalize Rust? First of all, I'm (becoming) a programming languages researcher, and Rust is an interesting new langauge to study. Honestly, that's enough of a reason for me.
+Why do we want to formalize Rust? First of all, I'm (becoming) a programming languages researcher, and Rust is an interesting new language to study.
+It's going to be fun! Honestly, that's enough of a reason for me.
 But there are other reasons: It shouldn't be a surprise that [bugs](https://github.com/rust-lang/rust/issues/24292) have [been](https://github.com/rust-lang/rust/issues/26656) [found](https://github.com/rust-lang/rust/issues/24880) in Rust.
 There are lots of things that can be done about such bugs - my take on this is that we should try to *prove*, in a mathematically rigorous way, that no such bugs exist in Rust.
 This goes hand-in-hand with other approaches like testing, fuzzing and [static analysis](https://homes.cs.washington.edu/~emina/pubs/crust.ase15.pdf).
-We (at my research group) are into formalizing things, so that's what we are going to do.
+However, we (at my [research group](http://plv.mpi-sws.org/)) are into formalizing things, so that's what we are going to do.
 
-## Limitations
+
 
-Let's just get this out of the way at the beginning: The first formal model is not going to cover all features of Rust.
+## Scope
+
+Let me just get this out of the way at the beginning: The first formal model is not going to cover all features of Rust.
 Some of the missing pieces will hopefully be added later, some maybe not.
 Rust is huge; it's too large to be handled all in a single go.
-So, the first version is not going to cover traits, trait objects, type polymorphism, `Drop`, slices, the `'static` lifetime, nor unwinding on panic.
-I know, that's a long list.
-Some of these I have some good ideas for, and I don't think they should be too hard (like slices or unwinding), others I am pretty sure will be hard (like `Drop`).
+In particular, the first version is not going to cover automatic destructors, i.e., the `Drop` trait and the dropck mechanism.
+This is, of course, a significant gap, even more so since dropck may well be the most subtle part of the type system, and the current version (as of Rust 1.3) is known to be unsound.
+But we have to start somewhere, and even without `Drop`, there's a lot to be done.
+We are also not going to have traits, unwinding and a few other pieces in the beginning.
+This is mostly to avoid getting distracted by all the details, and to keep the focus on the most important bits.
+In terms of what we are interested in, none of these features significantly changes the game.
+I have some good ideas for how to handle them later, should we have the time for that.
 
 ## So what's left?
 
 You may wonder now, what are we even doing then? We are focusing on the core type system: Ownership and borrowing.
-There are some interesting nuts to crack here: What we are looking for is a *semantic model* of Rust's type system.
-Let me first explain why this is the way to go.
+The goal is to prove that well-typed Rust programs are *memory and thread safe*, which, roughly speaking, means that all executed pointer accesses are valid, and there are no data races.
+I will often just say "memory safety", but concurrency is definitely part of the story.
+
+## The syntactic approach and its limitations
 
-The goal is to prove that well-typed Rust programs are *memory safe*, which, roughly speaking, means that all executed pointer accesses are valid.
-We could now be looking at the checks the Rust compiler is doing, and proving that these checks imply memory safety.
-This is of course and important property, and actually, this [has been done](ftp://ftp.cs.washington.edu/tr/2015/03/UW-CSE-15-03-02.pdf) already.
-However, it is not good enough: The moment your program uses [`unsafe`](https://doc.rust-lang.org/stable/book/unsafe.html), this result does not apply to your program any more.
-It is the whole purpose of `unsafe` to disable some compiler checks, and hence any proof relying solely on such compiler checks simply does not imply to programs using `unsafe`.
+We could now be looking at the checks the Rust compiler is doing, and prove that these checks imply memory safety.
+This is of course an important property, and actually, this [has been done](ftp://ftp.cs.washington.edu/tr/2015/03/UW-CSE-15-03-02.pdf) already.
+However, this proof only gets us so far: The moment your program uses [`unsafe`](https://doc.rust-lang.org/stable/book/unsafe.html), this result does not apply to your program any more.
+It is the whole purpose of `unsafe` to permit writing dangerous code that the compiler cannot check, and hence any proof relying solely on such compiler checks simply does not apply.
 In some sense, programs containing `unsafe` are not well-typed.
 (More strictly speaking, they are well-typed in a more liberal type system that the above result does not, and cannot, apply to.)
-Note that this is "viral": `Vec`, `RefCell`, `Rc` - all these and many more standard library data structures use `unsafe`, any if your program uses any of them, you're out of luck.
+Note that this issue is viral: `Vec`, `RefCell`, `Rc` - all these and many more standard library data structures use `unsafe`, and if your program uses any of them, you're out of luck.
 
-Intuitively, why do we even think that a program that uses `Vec`, but contains no `unsafe` *itself*, should be safe?
-It's not (just) the compiler checks.
+## Semantics to the rescue
+
+Intuitively, why do we even think that a program that uses `Vec`, but contains no `unsafe` block *itself*, should be safe?
+It's not (just) the compiler checks making sure that we don't call `push` on a shared borrow.
 We also rely on `Vec` *behaving well* - or, I should rather say, *behaving well-typed*.
-Yes, `Vec` uses `unsafe` - but it should not be possible to *observe* this is a well-typed programs, calling only the safe interface provided by `Vec`.
-There are some rules that Rust uphelds, some invariants that safe, well-typed code automatically preserves, that people writing unsafe code have to maintain themselves.
+Yes, `Vec` uses `unsafe` - but it should not be possible to *observe* this in a well-typed safe program, calling only the interface provided by `Vec`.
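+To make this a little more concrete, here is a rough sketch (the numbers are just made up for illustration): safe code interacts with `Vec` only through its interface, and the borrow checker stops us from calling `push` while a shared borrow into the vector exists.
+
+```rust
+fn main() {
+    let mut v = vec![1, 2, 3];
+    {
+        // Shared borrow pointing into the Vec's buffer.
+        let first = &v[0];
+        // The compiler rejects the next line: `push` needs a mutable borrow,
+        // and a reallocation could invalidate `first`.
+        // v.push(4);
+        println!("first = {}", first);
+    } // The shared borrow ends here.
+    v.push(4); // Mutating the Vec is fine again.
+    assert_eq!(v.len(), 4);
+}
+```
+
+Note that this program only ever touches the safe interface of `Vec`; whether `Vec` uses `unsafe` internally is invisible to it - and that is exactly what we have to rely on.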
+There are some rules that Rust enforces, some invariants that safe, well-typed code automatically upholds, that people writing unsafe code have to maintain themselves.
 `Vec` performs checks and bookkeeping to get its tracking of the size and capacity of the container right.
-`RefCell` counts the number of outstanding borrows, to make sure at run-time that no two mutable borrowsto the same cell exist.
+`RefCell` counts the number of outstanding borrows, to make sure at run-time that no two mutable borrows to the same cell exist.
 `Rc` counts the number of references that are held, and only when that count reaches 0 does it deallocate the memory.
 Crucially though, it only hands out *immutable*, shared borrows to its content.
 If any of these checks were removed, the data structure would clearly be unsafe, and even safe code could witness that.
 But if the Rust authors got all these checks right, then safe code cannot even tell that unsafe operations are being performed.
+
+
 This means there is something to *manually prove* about unsafe code: It has to behave as if it were well-typed.
 This is a notion that can actually be captured formally.
-Besdies the merely syntactic notion of well-typedness that relies on a type checker matching the syntax of the program against a bunch of rules, one can define a *semantic* notion of well-typedess that is defined solely in terms of what one can *do* with a piece of code.
+Besides the merely syntactic notion of well-typedness that relies on a type checker matching the syntax of the program against a bunch of rules, one can define a *semantic* notion of well-typedness that is defined solely in terms of what one can *do* with a piece of code.
 For example, for a function to be syntactically well-typed, one goes ahead and looks at its code, and makes sure that code is written the right way.
-For a function to be semantically well-typed, however, one checks what the function *does* on every possible input, how the output looks like.
+For a function to be *semantically* well-typed, however, one checks what the function does on every possible input, and what the output looks like.
 The actual code doing this does not matter.
-Only recently, research has been able to scale up such semantic methods to languages that combine state (i.e., mutable variables, a heap) with higher-order functions - languages like Rust, where I can take a `Fn` and store it in memory, for later use.
+Only recently, research has been able to scale up such semantic methods to languages that combine state (i.e., mutable variables, a heap) with higher-order functions - languages like Rust, where I can take a closure (a `Fn{,Mut,Once}`) and store it in memory, for later use.
 So, that's what we are trying to do: Come up not only with a formal version of Rust's (syntactic) type system, but also with appropriate *semantics* for these types.
 We then have to show that the syntactic types make semantic sense.
 This recovers, in a very roundabout way, the result I mentioned above:
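+As a small, hypothetical sketch of what "behaving as if it were well-typed" can mean in practice, consider a function whose body uses `unsafe`, but which no safe caller can distinguish from one written entirely in safe code (say, with `std::mem::swap`):
+
+```rust
+// Hypothetical example function, made up for illustration.
+fn swap_ints(a: &mut i32, b: &mut i32) {
+    unsafe {
+        // The safety argument one would have to prove: both pointers are
+        // valid, and two `&mut` references are guaranteed not to alias.
+        let pa: *mut i32 = a;
+        let pb: *mut i32 = b;
+        let tmp = *pa;
+        *pa = *pb;
+        *pb = tmp;
+    }
+}
+
+fn main() {
+    let (mut x, mut y) = (1, 2);
+    swap_ints(&mut x, &mut y);
+    assert_eq!((x, y), (2, 1));
+}
+```
+
+Semantic well-typedness is the general form of this observation: a function is judged by what it does at its type, not by the code in its body.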