X-Git-Url: https://git.ralfj.de/web.git/blobdiff_plain/8f3a9b876ae6348f1197da06d9b209337d13f4de..94d0b1594ae4fffbf2999b59c411f6755f46d47a:/personal/_posts/2019-05-15-typeclasses-exponential-blowup.md

diff --git a/personal/_posts/2019-05-15-typeclasses-exponential-blowup.md b/personal/_posts/2019-05-15-typeclasses-exponential-blowup.md
index e59917b..4c9f3cf 100644
--- a/personal/_posts/2019-05-15-typeclasses-exponential-blowup.md
+++ b/personal/_posts/2019-05-15-typeclasses-exponential-blowup.md
@@ -4,10 +4,10 @@ categories: research coq
 forum: https://coq.discourse.group/t/blog-post-exponential-blowup-when-using-unbundled-typeclasses-to-model-algebraic-hierarchies/289
 ---
 
-When formalizing a proof in an interactive theorem prover like [Coq](https://coq.inria.fr/), one reoccuring issue is the handling of algebraic hierarchies.
+When formalizing a proof in an interactive theorem prover like [Coq](https://coq.inria.fr/), one reoccurring issue is the handling of algebraic hierarchies.
 Such hierarchies are everywhere: some operations are associative, while others commute; some types have an equivalence relation, while others also have a (pre-)order or maybe even a well-ordering; and so on.
 So the question arises: What is the best way to actually encode these hierarchies in Coq?
-Coq offers two mechanism that are suited to solve this task: typeclasses and canonical structures.
+Coq offers two mechanisms that are suited to solve this task: typeclasses and canonical structures.
 Both can be instrumented in different ways to obtain a (more or less) convenient-to-use algebraic hierarchy.
 A common approach using typeclasses is the ["unbundled" approach by Bas Spitters and Eelis van der Weegen](http://www.eelis.net/research/math-classes/mscs.pdf).
 However, as we learned the hard way in the Coq formalization of the [original Iris paper](https://iris-project.org/pdfs/2015-popl-iris1-final.pdf), this approach quickly leads to terms that seem to be exponential in size.
@@ -15,8 +15,8 @@ In this post, I will explain the cause of this exponential blowup.
 
-I should note that this observation is not new, it already occurs in [François Garillot's PhD thesis](https://pastel.archives-ouvertes.fr/pastel-00649586).
-The goal of this post is to provide a more self-contained presentation, not requiring all the context set up in that thesis.
+I should note that this observation is not new: it already occurs in [François Garillot's PhD thesis](https://pastel.archives-ouvertes.fr/pastel-00649586) and a [paper by the same author](https://hal.inria.fr/inria-00368403v2/document), and even the authors of the "unbundled" paper already note performance problems when scaling up.
+The goal of this post is to provide a more self-contained presentation, not requiring all the context set up in that thesis and going into more detail than the brief remarks in the two papers.
 
 ## Unbundled groups
 
@@ -172,12 +172,12 @@ That's 4 nested indices, and hence the terms have size `O(n^5)`.
 More generally speaking, the term size is *exponential* in the height of our hierarchy.
 That hurts.
 
-[^1]: Note that the *number* of indices does not matter, like the fact that `Monoid` has both an `Op` and a `Unit` index. That just affects the constant factor. It's the nesting of indices that gets us here.
+[^1]: Note that the *number* of indices does not matter, like the fact that `Monoid` has both an `Op` and a `Unit` index. That just affects the constant factor. It's the nesting of indices that gets us into trouble here.
 
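For reference, here is a minimal sketch of the kind of unbundled hierarchy these hunks are about; the class, field, and instance names are illustrative and not necessarily the ones used in the post. The point is only the shape: law classes take the operational classes, and each other, as indices.

{% highlight coq %}
(* Allow `{...} binders to generalize over remaining variables and holes. *)
Generalizable All Variables.

(* Operational classes: one class per operation. *)
Class Op (A: Type) := op : A -> A -> A.
Class Unit (A: Type) := unit : A.

(* Law classes take the operational classes -- and lower law classes -- as indices. *)
Class Semigroup (A: Type) `{Op A} : Prop := {
  sg_assoc : forall x y z : A, op (op x y) z = op x (op y z)
}.

Class Monoid (A: Type) `{Semigroup A, Unit A} : Prop := {
  monoid_left_id : forall x : A, op unit x = x;
  monoid_right_id : forall x : A, op x unit = x
}.

(* Instances for products. The `Semigroup (A*B)` instance has the `Op (A*B)`
   instance as an index, which in turn contains the `Op` instances of `A` and
   `B`; `Monoid (A*B)` repeats all of that one level deeper. This nesting of
   instance terms inside indices is what blows up for `A*A*...*A`. *)
Instance prod_op `{Op A, Op B} : Op (A * B) :=
  fun x y => (op (fst x) (fst y), op (snd x) (snd y)).
Instance prod_unit `{Unit A, Unit B} : Unit (A * B) := (unit, unit).
Instance prod_semigroup `{Semigroup A, Semigroup B} : Semigroup (A * B).
Proof. Admitted. (* the proof itself is irrelevant to the size argument *)
Instance prod_monoid `{Monoid A, Monoid B} : Monoid (A * B).
Proof. Admitted.
{% endhighlight %}

Each further level of the hierarchy that is added as an index (say, a `Group` class on top of `Monoid`) nests the instance terms one level deeper, consistent with the `O(n^5)` for 4 nested indices mentioned in the hunk above.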
 ## Conclusion
 
 We have seen that the number of nested indices of a typeclass has an exponential impact on the size of proof terms that arise when deriving typeclass instances for compound types.
-This effect has grinded the original Iris coq development back in 2015 to a halt: compiling the correctness proof of the logic itself took around 3.5 hours and needed more than the 4GB of RAM than my laptop had back then.
+This effect ground the original Iris Coq development to a halt back in 2015: compiling the correctness proof of the logic itself took around 3.5 hours and needed more than the 4GB of RAM that my laptop had back then.
 In my view, this basically rules out the unbundled approach for any serious formalization effort.
 But what are the alternatives?
 
@@ -192,7 +192,7 @@ Class Monoid (A: Type) `{Op A, Unit A} := {
 {% endhighlight %}
 
 This limits the "index depth" to 2, and thus limits term complexity to `O(n^3)`.
-Still not great, but at least it doesn't grow further as we add more sublcasses.
+Still not great, but at least it doesn't grow further as we add more subclasses.
 However, this scheme does not support diamonds in the hierarchy, as instances will be duplicated.
 Also, the instance we implicitly added by writing `:>`, namely `Monoid A -> Semigroup A`, means we'll check all `Monoid` instances any time we just want a `Semigroup`.
 This leads to exponential complexity when performing backtracking typeclass search: a `Semigroup (A*A)` can be obtained either by searching for a `Monoid (A*A)` or via `prod_semigroup`, so in case of backtracking, Coq will go through a combinatorial explosion when trying all possible ways to derive `Semigroup (A*A*A*...)`.[^2]
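To make the alternative shown in this last hunk concrete, here is roughly what it could look like. The diff only shows the class header, so the fields below are an illustrative guess rather than the post's actual code; the `:>` on the `Semigroup` field is what registers the implicit `Monoid A -> Semigroup A` instance discussed above.

{% highlight coq %}
Generalizable All Variables.

Class Op (A: Type) := op : A -> A -> A.
Class Unit (A: Type) := unit : A.

Class Semigroup (A: Type) `{Op A} : Prop := {
  sg_assoc : forall x y z : A, op (op x y) z = op x (op y z)
}.

(* The superclass is now a *field* (turned into an instance by `:>`; recent
   Coq spells this `::`) instead of an index, so the only indices left are
   the operational classes `Op` and `Unit`. *)
Class Monoid (A: Type) `{Op A, Unit A} : Prop := {
  monoid_semigroup :> Semigroup A;  (* the implicit `Monoid A -> Semigroup A` instance *)
  monoid_left_id : forall x : A, op unit x = x;
  monoid_right_id : forall x : A, op x unit = x
}.

Instance prod_op `{Op A, Op B} : Op (A * B) :=
  fun x y => (op (fst x) (fst y), op (snd x) (snd y)).

(* With both of these around, `Semigroup (A*A)` can be found in two ways:
   directly via `prod_semigroup`, or by first finding a `Monoid (A*A)` and
   then projecting out `monoid_semigroup` -- the two search paths behind the
   backtracking blowup described in the last paragraph. *)
Instance prod_semigroup `{Semigroup A, Semigroup B} : Semigroup (A * B).
Proof. Admitted. (* proof omitted in this sketch *)
{% endhighlight %}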