Proofs on Lists and Trees
Welcome to week 4 of CS-214 (Software Construction)! The aim of this exercise set is to familiarize you with equational reasoning.
As usual, ⭐️ indicates the most important exercises and questions and 🔥 indicates the most challenging ones.
You do not need to complete all exercises to succeed in this class, and you do not need to do all exercises in the order they are written.
We strongly encourage you to solve the exercises on paper first, in groups. After completing a first draft on paper, you may want to check your solutions on your computer. To do so, you can download the scaffold code (ZIP).
Proofs on lists
Composition of maps ⭐️
Prove that the following equivalence holds by using inductive reasoning:
∀ (l: List[T]),
  l.map(f).map(g) === l.map(f `andThen` g)
(∀ is short for “for all”, so the statement above says: “for all lists l of type List[T], l.map(f).map(g) equals l.map(f `andThen` g)”.)
Here are the relevant axioms for this proof:
Nil.map(f) === Nil
(x :: xs).map(f) === f(x) :: (xs.map(f))
(f `andThen` g)(x) === g(f(x))
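Before starting the paper proof, it can help to check the claim on a concrete list (a sanity check only, not a proof; the functions f and g below are arbitrary samples):

```scala
// Sample functions (any functions would do)
val f = (n: Int) => n + 1
val g = (n: Int) => n * 2

val l = List(1, 2, 3)
val lhs = l.map(f).map(g)     // map twice
val rhs = l.map(f.andThen(g)) // map once with the composed function
// Both are List(4, 6, 8)
```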
Be very precise in your proof:
- Make sure to state what you want to prove, and what your induction hypothesis is, if any.
- Clearly state which axiom you use at each step, and when/if you use the induction hypothesis.
- Use only one axiom / hypothesis at each step: applying two axioms requires two steps.
- Underline the part of each expression on which you apply the axiom or hypothesis at each step.
A more complicated proof (FP midterm 2016)
We want to implement a function sum(list: List[Int]): Int, which returns the sum of the elements of a list of Ints. We can easily specify that function as follows:
(1) sum(Nil) === 0
(2) sum(x :: xs) === x + sum(xs)
We can implement sum using foldLeft:
def betterSum(list: List[Int]): Int =
list.foldLeft(0)(add)
def add(a: Int, b: Int): Int = a + b
Why not naively translate the specification into Scala?
We can also naively translate the specification into a Scala implementation, but it’s a non-tail-recursive function. There’s no need to understand tail recursion now, we will learn about it in week 11!
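For reference, a sketch of that naive transcription of axioms (1) and (2); note that the recursive call is not in tail position:

```scala
// Direct translation of the specification:
//   (1) sum(Nil) === 0
//   (2) sum(x :: xs) === x + sum(xs)
// Each recursive call adds a stack frame, so very long lists
// may overflow the call stack.
def sum(list: List[Int]): Int = list match
  case Nil     => 0
  case x :: xs => x + sum(xs)
```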
However, the foldLeft implementation is not obviously correct. We would like to prove that it is correct for all lists of integers. In other words, we want to prove that
list.foldLeft(0)(add) === sum(list)
for all lists of integers.
In addition to the specification of sum (axioms 1-2), you may use the following axioms:
(3) Nil.foldLeft(z)(f) === z
(4) (x :: xs).foldLeft(z)(f) === xs.foldLeft(f(z, x))(f)
(5) add(a, b) === a + b
(6) a + b === b + a
(7) (a + b) + c === a + (b + c)
(8) a + 0 === a
Axioms 3-5 follow from the implementations of foldLeft and add. Axioms 6-8 encode well-known properties of Int.+: commutativity, associativity, and neutral element.
Your task: Prove the following lemma by structural induction:
∀ (l: List[Int]) (z: Int), l.foldLeft(z)(add) === z + sum(l)
From that lemma, we can (with the help of axioms 6 and 8) derive that the implementation of betterSum is correct by substituting 0 for z in the lemma. You are not asked to do that last bit.
A hard proof on foldLeft and foldRight 🔥
We have now seen two list-traversal functions: one, from left to right, called foldLeft; and another, from right to left, called foldRight. In this exercise, we’ll see how to relate them to each other.
Let’s look back at the definitions of foldLeft and foldRight:
def foldLeft[A, B](base: B, f: (B, A) => B)(l: List[A]): B = l match
case Nil => base
case Cons(x, xs) => foldLeft(f(base, x), f)(xs)
def foldRight[A, B](f: (A, B) => B, base: B)(l: List[A]): B = l match
case Nil => base
case Cons(x, xs) => f(x, foldRight(f, base)(xs))
Let’s see what they have in common on a concrete example:
- Using the substitution method, reduce the expression foldLeft(base, f)(a :: b :: c :: d :: Nil) for an arbitrary function f and arbitrary values a, b, c, d. Similarly, reduce the expression foldRight(g, base)(a :: b :: c :: d :: Nil) for an arbitrary function g and arbitrary values a, b, c, d.
Solution
foldLeft(base, f)(a :: b :: c :: d :: Nil)
= foldLeft(f(base, a), f)(b :: c :: d :: Nil)
= foldLeft(f(f(base, a), b), f)(c :: d :: Nil)
= foldLeft(f(f(f(base, a), b), c), f)(d :: Nil)
= foldLeft(f(f(f(f(base, a), b), c), d), f)(Nil)
= f(f(f(f(base, a), b), c), d)
foldRight(g, base)(a :: b :: c :: d :: Nil)
= g(a, foldRight(g, base)(b :: c :: d :: Nil))
= g(a, g(b, foldRight(g, base)(c :: d :: Nil)))
= g(a, g(b, g(c, foldRight(g, base)(d :: Nil))))
= g(a, g(b, g(c, g(d, foldRight(g, base)(Nil)))))
= g(a, g(b, g(c, g(d, base))))
- Based on the example above, conjecture a relation between foldLeft and foldRight.
Solution
Given types A and B, for any list l: List[A], base value base: B and function f: (A, B) => B, the result of foldRight(f, base)(l) is the same as foldLeft(base, (x, y) => f(y, x))(reverse(l)).
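We can test the conjectured relation on a sample input before proving it (the definitions from above are adapted here to the standard-library List, and the sample function g is arbitrary):

```scala
def foldLeft[A, B](base: B, f: (B, A) => B)(l: List[A]): B = l match
  case Nil     => base
  case x :: xs => foldLeft(f(base, x), f)(xs)

def foldRight[A, B](f: (A, B) => B, base: B)(l: List[A]): B = l match
  case Nil     => base
  case x :: xs => f(x, foldRight(f, base)(xs))

val l = List("a", "b", "c", "d")
// Record the shape of the computation as a string:
val g = (x: String, acc: String) => s"($x :: $acc)"

val lhs = foldRight(g, "Nil")(l)
// Flip the argument order and traverse the reversed list:
val rhs = foldLeft("Nil", (acc: String, x: String) => g(x, acc))(l.reverse)
// Both build "(a :: (b :: (c :: (d :: Nil))))"
```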
Now, let’s prove this property!
As in the exercises above, we can specify foldLeft, foldRight and reverse as follows:
(1) Nil.foldLeft(z)(g) === z
(2) (x :: xs).foldLeft(z)(g) === xs.foldLeft(g(z, x))(g)
(3) Nil.foldRight(f)(z) === z
(4) (x :: xs).foldRight(f)(z) === f(x, xs.foldRight(f)(z))
(5) Nil.reverse === Nil
(6) (x :: xs).reverse === xs.reverse ++ (x :: Nil)
(7) Nil ++ l === l
(8) (x :: xs) ++ l === x :: (xs ++ l)
Your task: Prove the following lemma (called fold_left_rev_right) by structural induction:
fold_left_rev_right: ∀ [A, B] (l: List[A]) (f: (A, B) => B) (z: B),
  l.reverse.foldRight(f)(z) === l.foldLeft(z)((b: B, a: A) => f(a, b))
Simple Stack Machine Interpreter 🔥
A stack machine uses a last-in, first-out (LIFO) stack to hold short-lived temporary values. Assume the values are integers. Then, a stack can be viewed as a list of integers:
type Stack = List[Int]
src/main/scala/lists/Interpreter.scala
Most of a stack machine’s instructions assume that operands are taken from the stack and results are placed back on the stack. We consider a simple stack machine that supports only the following instructions:
- Push(v): push the integer v onto the top of the stack;
- Pop: pop the top element off the stack;
- Add, Minus, Mul, Div: pop the top two elements of the stack, use them as operands of the operation, then push the result back onto the stack.
Exceptions:
- For Div, a DividedByZeroException should be thrown when the divisor is zero.
- A NotEnoughOperandsInStackException should be thrown when the stack does not have enough values for an instruction.
A program is a list of instructions:
type Program = List[Instruction]
src/main/scala/lists/Interpreter.scala
For example, consider the expression A * (B - C) + (D + E)
, written in reverse Polish notation as A B C - * D E + +
. Compiling and running this on a simple stack machine would take the form:
Program # stack contents (leftmost = top = most recent):
Push(A) # A
Push(B) # B A
Push(C) # C B A
Minus # B-C A
Mul # A*(B-C)
Push(D) # D A*(B-C)
Push(E) # E D A*(B-C)
Add # D+E A*(B-C)
Add # A*(B-C)+(D+E)
(The example is taken from Wikipedia.)
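The same trace can be replayed with concrete values, say A=2, B=5, C=3, D=4, E=1, treating the stack as a List[Int] with the head as the top. This is a manual walk-through, not the interpreter you are asked to write; binop is a hypothetical helper, not part of the scaffold:

```scala
type Stack = List[Int]

// Pop the top two elements, apply op (second-from-top first), push the result.
def binop(op: (Int, Int) => Int)(s: Stack): Stack = s match
  case top :: second :: rest => op(second, top) :: rest
  case _ => throw new IllegalStateException("not enough operands")

val s0: Stack = Nil
val s1 = 2 :: s0          // Push(A = 2): List(2)
val s2 = 5 :: s1          // Push(B = 5): List(5, 2)
val s3 = 3 :: s2          // Push(C = 3): List(3, 5, 2)
val s4 = binop(_ - _)(s3) // Minus:       List(2, 2)  (B - C)
val s5 = binop(_ * _)(s4) // Mul:         List(4)     (A * (B - C))
val s6 = 4 :: s5          // Push(D = 4): List(4, 4)
val s7 = 1 :: s6          // Push(E = 1): List(1, 4, 4)
val s8 = binop(_ + _)(s7) // Add:         List(5, 4)  (D + E)
val s9 = binop(_ + _)(s8) // Add:         List(9) == 2 * (5 - 3) + (4 + 1)
```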
In this section, your task is to implement a simple stack machine interpreter.
That is, implement a method interpProg which takes an initial stack and a
program, then returns the stack after execution of the program. Assume you have
access to a function interpInst, which computes the new state of
a stack given the old state and a single instruction. You must use foldLeft to
implement interpProg.
def interpProg(stack: Stack, program: Program): Stack =
???
src/main/scala/lists/Interpreter.scala
Now, you need to implement the interpInst function:
def interpInst(stack: Stack, inst: Instruction): Stack =
???
src/main/scala/lists/Interpreter.scala
Proofs on trees
In this part, we’ll focus on structural induction on trees. We’ll also see how to write proofs about arbitrary structurally recursive functions, not just map, fold, and other functions for which axioms were given by the problem statement.
We’ll use an Expr type similar to the one from the calculator lab as our motivating example.
For simplicity, we remove the Div and Neg cases and use BigInt for the numbers.
Here is the definition of Expr:
enum Expr:
case Number(value: BigInt)
case Var(name: String)
case Add(e1: Expr, e2: Expr)
case Minus(e1: Expr, e2: Expr)
case Mul(e1: Expr, e2: Expr)
src/main/scala/trees/Expr.scala
β¦ and of evaluate
, written here with a function context. Notice that we return a BigInt
(not an Option[BigInt]
) for simplicity, which means that undefined variables have a default value (and that there are no overflows, since we’re using BigInt
):
def evaluate(ctx: String => BigInt, e: Expr): BigInt = e match
case Number(value) => value
case Var(name) => ctx(name)
case Add(e1, e2) => evaluate(ctx, e1) + evaluate(ctx, e2)
case Minus(e1, e2) => evaluate(ctx, e1) - evaluate(ctx, e2)
case Mul(e1, e2) => evaluate(ctx, e1) * evaluate(ctx, e2)
src/main/scala/trees/Expr.scala
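For instance, we can evaluate a small expression under a context that maps x to 2 and defaults every other variable to 0 (the definitions are repeated so the snippet stands alone; the variable names and values are illustrative):

```scala
enum Expr:
  case Number(value: BigInt)
  case Var(name: String)
  case Add(e1: Expr, e2: Expr)
  case Minus(e1: Expr, e2: Expr)
  case Mul(e1: Expr, e2: Expr)
import Expr.*

def evaluate(ctx: String => BigInt, e: Expr): BigInt = e match
  case Number(value) => value
  case Var(name)     => ctx(name)
  case Add(e1, e2)   => evaluate(ctx, e1) + evaluate(ctx, e2)
  case Minus(e1, e2) => evaluate(ctx, e1) - evaluate(ctx, e2)
  case Mul(e1, e2)   => evaluate(ctx, e1) * evaluate(ctx, e2)

// A context where x is 2 and every other variable defaults to 0:
val ctx: String => BigInt = Map("x" -> BigInt(2)).withDefaultValue(BigInt(0))

val r1 = evaluate(ctx, Add(Var("x"), Number(3))) // 2 + 3 == 5
val r2 = evaluate(ctx, Mul(Var("y"), Number(7))) // y defaults to 0, so 0
```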
To reason about arithmetic in the rest of this exercise, use the following axioms:
ZeroPlusZero: 0 + 0 === 0
CommuPlus: a + b === b + a
CommuMul: a * b === b * a
BoolEq: (a == b) === true ⟺ a === b
TrueAnd: a && b === true ⟺ ((a === true) and (b === true))
Warm up: mirror
Let’s warm up with a simple operation, mirror:
def mirror(e: Expr): Expr = e match
case Number(value) => Number(value)
case Var(name) => Var(name)
case Add(e1, e2) => Add(mirror(e2), mirror(e1))
case Minus(e1, e2) => Minus(mirror(e2), mirror(e1))
case Mul(e1, e2) => Mul(mirror(e2), mirror(e1))
src/main/scala/trees/Expr.scala
mirror swaps the left and right children throughout an expression tree.
For example, mirror(Add(Number(1), Number(2))) is Add(Number(2), Number(1)).
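We can replay this example directly (using a reduced Expr with only the Number and Add cases, so the snippet stands alone):

```scala
enum Expr:
  case Number(value: BigInt)
  case Add(e1: Expr, e2: Expr)
import Expr.*

def mirror(e: Expr): Expr = e match
  case Number(value) => Number(value)
  case Add(e1, e2)   => Add(mirror(e2), mirror(e1))

val e = Add(Number(1), Number(2))
val m = mirror(e) // Add(Number(2), Number(1))
```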
Writing axioms from definitions
We have seen three ways to reason about the execution of code until now:
- The substitution method with concrete variables, to understand how code works on a specific input;
- Applying rewrite rules (such as MapCons or IH), to prove properties valid for all inputs;
- The substitution method with variables, to reason about strongest postconditions.
In formal proofs we applied only (2), rewrite rules, but these techniques are related: uses of the substitution method can be converted into applications of rewrite rules. For example, for evaluate, applying the substitution method gives us the following axioms, valid for all value, name, e1, and e2:
EvalNumber: evaluate(ctx, Number(value)) === value
EvalVar: evaluate(ctx, Var(name)) === ctx(name)
EvalAdd: evaluate(ctx, Add(e1, e2)) === evaluate(ctx, e1) + evaluate(ctx, e2)
EvalMinus: evaluate(ctx, Minus(e1, e2)) === evaluate(ctx, e1) - evaluate(ctx, e2)
EvalMul: evaluate(ctx, Mul(e1, e2)) === evaluate(ctx, e1) * evaluate(ctx, e2)
Write similar axioms for the function mirror above.
Proving properties of mirror ⭐️
Does mirror preserve the value of its input expression? That is, can we prove evaluate(ctx, mirror(e)) == evaluate(ctx, e)?
If yes, write a proof of this fact using structural induction on e. If not, give a counterexample (a value for e and ctx such that the theorem does not hold), then conjecture a theorem that does hold and prove it.
Hint
It does not hold: mirror changes the value of Minus nodes: eval(Minus(1, 3)) is -2, but eval(mirror(Minus(1, 3))) is 2. If you attempt a proof, you will get stuck on the inductive step for Minus:
    evaluate(ctx, mirror(Minus(e1, e2)))
=== evaluate(ctx, Minus(mirror(e2), mirror(e1)))          // by MirrorMinus
=== evaluate(ctx, mirror(e2)) - evaluate(ctx, mirror(e1)) // by EvalMinus
=== evaluate(ctx, e2) - evaluate(ctx, e1)                 // by IH1 and IH2
=/= evaluate(ctx, e1) - evaluate(ctx, e2)                 // Not equal!
=== evaluate(ctx, Minus(e1, e2))                          // by EvalMinus
What does hold, however, is the fact that mirror(mirror(e)) == e. Functions like mirror that satisfy this property are called involutive.
Now, prove it!
Verification of static analyzers ⭐️
Let’s continue our program proofs journey by looking at a static analyzer. A static analyzer is a program that checks a property of another program.
For example, the following static analyzer checks whether the input expression e always evaluates to 0:
def zeroExpr(e: Expr): Boolean = e match
case Number(value) => value == 0
case Var(_) => false
case Add(e1, e2) => zeroExpr(e1) && zeroExpr(e2)
case Minus(e1, e2) => zeroExpr(e1) && zeroExpr(e2)
case Mul(e1, e2) => zeroExpr(e1) && zeroExpr(e2)
src/main/scala/trees/Expr.scala
It correctly identifies that Add(Number(0), Mul(Number(0), Number(0))) is always zero by returning true, and it returns false for Add(Number(0), Mul(Number(1), Number(0))).
We can write this property formally as P(e) === ∀ ctx. evaluate(ctx, e) == 0.
A static analyzer is sound if all programs for which it returns true have the property (in other words, it has no false positives):
- Soundness: ∀ e. zeroExpr(e) == true ⟹ P(e).
  Or equivalently: ∀ e. ¬P(e) ⟹ zeroExpr(e) == false
It is complete if it identifies all programs that have the property (in other words, there are no false negatives: all programs for which it returns false do not have the property).
- Completeness: ∀ e. P(e) ⟹ zeroExpr(e) == true.
  Or equivalently: ∀ e. zeroExpr(e) == false ⟹ ¬P(e).
- Write rewriting rules for zeroExpr.
- Is zeroExpr complete? Is it sound? For each of these properties, provide either a counterexample if the property does not hold, or a proof if it does.
Computer-checked proofs
In previous exercises, you were asked to write proofs about functional code. These proofs allow you to mathematically express and verify your beliefs about programs. However, who checks your proofs?
You and your peers may well have faced situations where a proof was invalidated, or marked incorrect, because of a small mistake. Writing proofs can be tedious and error-prone, and correctly grading them even more so!
Further, while writing a proof, you are never sure whether you are correct. If only the computer could check the proof for you and provide feedback. This is indeed possible! In this exercise, we’ll see such an interface and use it to check some proofs we’ve already seen.
The proof syntax is intended to be close to what you would write on paper, but with your IDE providing hints along the way. While writing your proof, you can now have it checked by the computer and get live feedback!
For example, let’s try to prove (Nil ++ Nil).map(single) === Nil
using the
following axioms:
(1) Nil.map(f) === Nil
(2) (x :: xs).map(f) === f(x) :: xs.map(f)
(3) Nil.flatten === Nil
(4) (xs :: xss).flatten === xs ++ xss.flatten
(5) Nil ++ ys === ys
(6) (x :: xs) ++ ys === x :: (xs ++ ys)
(7) single(x) === x :: Nil
The axioms provided are true for all values of the variables; they are implicitly universally quantified, as before. Say that in the paper proof we write each step in the form x === y by Rule:
(Nil ++ Nil).map(single)
=== Nil.map(single) by 5
=== Nil by 1
Now it’s time to use the computer to check this proof!
At the top of file MapSingleFlatten.scala
, you can find:
-
declarations of available variables:
val f: T => List[T] = ListFunctionVariable("f")
val x: T = Variable("x")
val xs: List[T] = ListVariable("xs")
val ys: List[T] = ListVariable("ys")
val xss: List[List[T]] = ListVariable("xss")
src/main/scala/itp/MapSingleFlatten.scala
-
predefined axioms:
object Axioms:
  object Map:
    val NilCase = Axiom(Nil.map(f) === Nil)
    val ConsCase = Axiom((x :: xs).map(f) === f(x) :: xs.map(f))
  object Flatten:
    val NilCase = Axiom(Nil.flatten === Nil)
    val ConsCase = Axiom((xs :: xss).flatten === xs ++ xss.flatten)
  object Append:
    val NilCase = Axiom(Nil ++ ys === ys)
    val ConsCase = Axiom((x :: xs) ++ ys === x :: (xs ++ ys))
  object Single:
    val NilCase = Axiom(single(x) === x :: Nil)
end Axioms
src/main/scala/itp/MapSingleFlatten.scala
These can be accessed like any other Scala object. A proof looks like:
// example of a theorem applying a single rule
val exampleRuleApplication = Theorem(Nil.map(f) === Nil):
Nil.map(f)
=== Nil ==< Map.NilCase
src/main/scala/itp/MapSingleFlatten.scala
Note that in each step x === y by Rule, the by has been replaced by the smooth
==< operator. If you have ligatures enabled in your IDE, the theorem should
look like this:
You can optionally see instructions on enabling ligatures in VSCode here.
Our previous paper proof on (Nil ++ Nil).map(single) === Nil
can be
translated to Scala as follows:
// example of a theorem applying two rules successively
val multipleRuleApplication = Theorem((Nil ++ Nil).map(single) === Nil):
(Nil ++ Nil).map(single)
=== Nil.map(single) ==< Append.NilCase
=== Nil ==< Map.NilCase
src/main/scala/itp/MapSingleFlatten.scala
You can run the examples with sbt runMain itp.examples
, or just sbt run
and then selecting itp.examples
at the prompt. Running the file should
print:
[info] running (fork) itp.examples
[info] Proved Theorem exampleRuleApplication: (Nil.map(f) === Nil)
...
[info] Proved Theorem multipleRuleApplication: ((Nil ++ Nil).map(single) === Nil)
...
Woohoo 🥳! Your proof now comes with a computer-checked certificate of
correctness. If you alter a proof step to be incorrect, for example attempting
Nil.map(single) === xss ==< Map.NilCase, you will instead see:
[info] running (fork) itp.examples
...
[info] java.lang.IllegalArgumentException: requirement failed:
[info] MapSingleFlatten.scala:53 : proof step failed:
[info] LHS: Nil.map(single)
[info] RHS: xss
[info] Cannot be justified with the statement: (Nil.map(single) === Nil)
...
pointing out the equality you were attempting to prove, the rule applied, and the file and line of the failed proof.
You can use ~runMain itp.examples in sbt to re-run the checks automatically whenever you change the proof.
Starting with Proofs ⭐️
You can run the warm-up exercises with ~runMain itp.startingWithProofs in sbt.
Proofs with Axioms
Based on the provided axioms, complete the proof nilAppend
:
val nilAppend = Theorem((x :: Nil) ++ Nil === (x :: Nil)):
(x :: Nil) ++ Nil
???
src/main/scala/itp/MapSingleFlatten.scala
You can look at the examples multipleRuleApplication and usingAssumption for more syntax.
Proofs with Assumptions
Based on the provided axioms and assumptions, complete the proof
concatNilFlatten
:
val assumption = xs ++ Nil === Nil ++ xs
val concatNilFlatten = Theorem(
assumption ==> ((xs ++ Nil.flatten) === xs)
):
???
src/main/scala/itp/MapSingleFlatten.scala
What happens if you alter the assumption statement? What if you do not put the assumption in the theorem statement?
Restating and readability
Writing a list like x :: y :: Nil
can certainly be annoying, but since we are
just using Scala, we can define a familiar alias:
// convenient syntax for lists
// eg: List(x, y)
def List[A](xs: A*): List[A] =
if xs.isEmpty then Nil
else xs.head :: List(xs.tail*)
src/main/scala/itp/MapSingleFlatten.scala
To the proof, List(x, y)
and x :: y :: Nil
are exactly the same thing, and
so this does not affect how the proof checker sees our terms at all.
As neat as aliases and notations are most of the time, when mixed in with other terms, they can sometimes become hard to read. For example,
List(x) ++ Nil
=== x :: (Nil ++ Nil) ==< Axiom((x :: xs) ++ ys === x :: (xs ++ ys))
is not immediately obvious to understand, as the axiom uses x :: xs
, while our
statement has List(x)
. So, for both readability and to display the author’s
intention, it is often worthwhile to perform “restate” steps in a proof, writing
the same statement, but in a more understandable form:
List(x) ++ Nil
=== (x :: Nil) ++ Nil ==< ???
=== x :: (Nil ++ Nil) ==< Axiom((x :: xs) ++ ys === x :: (xs ++ ys))
The “restate” step, while not necessary, expands the definition of List(_)
to
allow for better readability for humans. In the end, the goal of a proof is to
convince yourself of mathematical statements, and to share those results
with other humans, who may use the proof to convince themselves in turn. It is
worthwhile to adapt your proof to display your train of thought.
Given that, how do we justify our “restate” step? What goes in the place of the
??? marker? In fact, since we are just writing x === x ==< Rule, this statement is
, this statement is
true regardless of the rule we provide. To be clear about our intentions, we say
that this statement follows from reflexivity, and define this as Restate
.
// if we are simply rewriting a step, we can say it follows from reflexivity
// note that adding x === x as an axiom does not change anything
// as it is already true
val Restate = Axiom(xs === xs)
src/main/scala/itp/MapSingleFlatten.scala
For example:
// example of how to use Restate (which is just True)
// effectively, x === x always holds!
// we write it as Restate to emphasize our intention
val rewritingTerms = Theorem(xs === xs):
xs
=== xs ==< Restate
src/main/scala/itp/MapSingleFlatten.scala
And we can adapt this into a larger example:
// an example of how we can unfold visual definitions
// to emphasize terms in a proof
val unfoldingDefs = Theorem(List(x) ++ Nil === List(x)):
List(x) ++ Nil
// we are simply rewriting the Scala definition of List(_)
// the proof would work just fine without it,
// but proofs aren't written just for computers,
// there's human readability and convenience to consider!
=== (x :: Nil) ++ Nil ==< Restate
=== x :: (Nil ++ Nil) ==< Append.ConsCase
=== x :: Nil ==< Append.NilCase
=== List(x) ==< Restate
src/main/scala/itp/MapSingleFlatten.scala
Proof of MapSingleFlatten ⭐️
Based on the provided axioms, prove the statement
xs.map(single).flatten === xs
for all lists xs. The proof is done in two parts, by induction, just as on paper.
As before, the proofs can be run and checked with ~runMain itp.theorems in sbt.
Base Case
First, complete the base case without assumptions:
// Base Case
val singleNil = Theorem(Nil.map(single).flatten === Nil):
Nil.map(single).flatten
???
src/main/scala/itp/MapSingleFlatten.scala
Inductive Case
Finally, complete the inductive case with the provided induction hypothesis:
// Induction Hypothesis
val IH = xs.map(single).flatten === xs
// Inductive Case
val singleInduct = Theorem(
IH ==> ((x :: xs).map(single).flatten === (x :: xs))
):
(x :: xs).map(single).flatten
???
src/main/scala/itp/MapSingleFlatten.scala
Background: proof checking
Proof checking is often a surprisingly complicated task. Many tools have been developed to help with this, called proof assistants. As the name suggests, these often provide more information than just checking your proofs, nudging you along the way. Some popular proof assistants are Coq, Lean, and Isabelle. A list of related tools can be found at EuroProofNet/Tools.
Our little proof checking interface is based on an in-house proof assistant
written in Scala, called Lisa. Lisa is
provided with the exercises as a JAR file in lib/
. The syntax provided is a
small subset of Lisa’s proof capabilities, restricted to equality proofs on
lists. Lisa uses set theory as its mathematical foundation, and so does not
have an inherent notion of types at all! The types available to you here are an
imitation of Scala lists that generate expression trees, which are internally
converted to Lisa expressions after erasing some of the type information.
There are many tactics and algorithms involved in expanding the equality proof from a single equality step you may write. Finally, these are reduced to some simple steps understood by Lisa’s core, the kernel. These are the rules of first-order logic as a sequent calculus. Limiting the core of the system to simple rules and building everything else on top of these allows you to be certain that your proofs are checked with confidence at the end!
In the rest of this exercise set, we practice some more proofs in this style.
Associativity of List Concatenation
The following axioms follow from the definition of ++:
val NilCase = Axiom((Nil ++ xs) === xs)
val ConsCase = Axiom(((x :: xs) ++ ys) === (x :: (xs ++ ys)))
src/main/scala/itp/AppendAssociativity.scala
Using these axioms, prove the associativity law by induction.
Base Case
val baseCase = Theorem(
(Nil ++ ys) ++ zs === Nil ++ (ys ++ zs)
):
???
src/main/scala/itp/AppendAssociativity.scala
Inductive Case
val IH = (xs ++ ys) ++ zs === xs ++ (ys ++ zs)
val inductiveCase = Theorem(
IH ==>
((x :: xs) ++ (ys ++ zs) === ((x :: xs) ++ ys) ++ zs)
):
???
src/main/scala/itp/AppendAssociativity.scala
Factoring out formulas
Proofs are chains of formulas together with justifications of their equivalence. We can, in principle, define our own functions that manipulate formulas. In particular, our associativity condition can be parameterized by the expression that stands for the first list, using the following function on expression trees representing formulas:
def assocFor(v: List[T]): PredicateFormula[List[T]] =
(v ++ ys) ++ zs === v ++ (ys ++ zs)
// Theorems for base case and inductive case using assocFor shorthand
???
src/main/scala/itp/AppendAssociativity.scala
Using this abbreviation, rewrite the statements of the base case and the inductive case to make them shorter.
Reversing Concatenation
Use the axioms that define reverse
, as well as the previously proven associativity law as an axiom
object Append:
val NilCase = Axiom((Nil ++ xs) === xs)
val ConsCase = Axiom(((x :: xs) ++ ys) === (x :: (xs ++ ys)))
val Assoc = Axiom((xs ++ ys) ++ zs === xs ++ (ys ++ zs))
val NilEnd = Axiom((xs ++ Nil) === xs)
object Reverse:
val NilCase = Axiom(Nil.reverse === Nil)
val ConsCase = Axiom((x :: xs).reverse === xs.reverse ++ List(x))
src/main/scala/itp/ReverseAppend.scala
to show
(xs ++ ys).reverse === (ys.reverse ++ xs.reverse)
We have defined the abbreviation List(x) to stand for x :: Nil.
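As a quick sanity check before the proof, the statement holds on standard-library lists for sample values:

```scala
// Sample lists only; the theorem itself must be proved by induction.
val xs = List(1, 2)
val ys = List(3, 4, 5)
val lhs = (xs ++ ys).reverse       // List(5, 4, 3, 2, 1)
val rhs = ys.reverse ++ xs.reverse // List(5, 4, 3, 2, 1)
```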
Base Case
val baseCase = Theorem(
(Nil ++ ys).reverse === (ys.reverse ++ Nil.reverse)
):
???
src/main/scala/itp/ReverseAppend.scala
Inductive Case
val IH =
???
val inductiveCase = Theorem(
IH ==>
(((x :: xs) ++ ys).reverse
=== ys.reverse ++ (x :: xs).reverse)
):
???
src/main/scala/itp/ReverseAppend.scala
Reversing Reversal
We will next use several previously proven theorems as axioms:
object Append:
val NilCase = Axiom((Nil ++ xs) === xs)
val ConsCase = Axiom(((x :: xs) ++ ys) === (x :: (xs ++ ys)))
val Assoc = Axiom((xs ++ ys) ++ zs === xs ++ (ys ++ zs))
val NilEnd = Axiom((xs ++ Nil) === xs)
object Reverse:
val NilCase = Axiom(Nil.reverse === Nil)
val ConsCase = Axiom((x :: xs).reverse === xs.reverse ++ List(x))
val AppendCase = Axiom((xs ++ ys).reverse === (ys.reverse ++ xs.reverse))
// if we are simply rewriting a step, we can say it follows from reflexivity
// note that adding x === x as an axiom does not change anything
// as it is already true
val Restate = Axiom(xs === xs)
src/main/scala/itp/ReverseReverse.scala
to show that reversing the list twice gives us back the original list:
xs.reverse.reverse === xs
Base Case
val baseCase = Theorem(
Nil.reverse.reverse === Nil
):
???
src/main/scala/itp/ReverseReverse.scala
Inductive Case
val IH = (xs.reverse.reverse === xs)
???
val inductiveCase = Theorem(
IH ==>
((x :: xs).reverse.reverse === (x :: xs))
):
???
src/main/scala/itp/ReverseReverse.scala
Trees with inOrder Traversal ⭐️
Trees are also amenable to equational proofs. We can do proofs by induction
by assuming that a property holds for the two subtrees left and right, and proving that it then holds for the overall tree Node(left, x, right).
In this exercise, we only show how to use equational proofs on a non-recursive function. However, the function uses list concatenation, so we will need to make use of the previously proven associativity of concatenation.
Specifically, we wish to prove that balancing a binary tree using a rotation operation preserves the order of elements. The rotation we consider is the following transformation:
q p
/ \ / \
p c ====> a q
/ \ / \
a b b c
To compute the list associated with a tree, we use the function inOrder, which lists the elements of the tree in order.
The property we wish to prove corresponds to applying inOrder
to both sides of the transformation pictured above.
Node(Node(a, p, b), q, c).inOrder === Node(a, p, Node(b, q, c)).inOrder
The behavior of inOrder and the other relevant axioms are the following:
object InOrder:
val EmptyCase = Axiom(Empty.inOrder === Nil)
val NodeCase = Axiom(Node(left, x, right).inOrder
=== left.inOrder ++ (List(x) ++ right.inOrder))
object Append:
val NilCase = Axiom((Nil ++ xs) === xs)
val ConsCase = Axiom(((x :: xs) ++ ys) === (x :: (xs ++ ys)))
val Assoc = Axiom(((xs ++ ys) ++ zs) === (xs ++ (ys ++ zs)))
src/main/scala/itp/TreeRotation.scala
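Before attempting the proof, we can check the rotation property on a concrete tree. The Tree enum and inOrder below are a minimal stand-in written from the axioms above, not the scaffold’s definitions:

```scala
enum Tree:
  case Empty
  case Node(left: Tree, x: Int, right: Tree)
import Tree.*

// inOrder written to match the InOrder axioms above
def inOrder(t: Tree): List[Int] = t match
  case Empty                => Nil
  case Node(left, x, right) => inOrder(left) ++ (List(x) ++ inOrder(right))

val a = Node(Empty, 1, Empty)
val b = Node(Empty, 3, Empty)
val c = Node(Empty, 5, Empty)
val before = Node(Node(a, 2, b), 4, c) // q = 4, p = 2
val after  = Node(a, 2, Node(b, 4, c)) // the rotated tree
// Both traversals yield List(1, 2, 3, 4, 5)
```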
Direct Proof
Try to prove the theorem in a single proof from left to right. Our solution has 7 steps.
val rotationInvariance = Theorem(
Node(Node(a, p, b), q, c).inOrder === Node(a, p, Node(b, q, c)).inOrder
):
???
src/main/scala/itp/TreeRotation.scala
Proof Using Lemmas
Sometimes it can be difficult to see how to construct a left-to-right proof directly. Of course, we can write the lines of the proof in any order; once we are done, we can ask the system to check the resulting proof. But we can do even more. We can prove individual equations for parts of the transformation, then combine them into a larger one. Such a proof may not be as pretty or even as short, but it may better reflect your problem-solving strategy.
In our example, it makes sense to expand, on both sides, the application of inOrder as far as the shape of the input tree allows. Try to compose your proof using several lemmas, including the following ones. You may need to define some lemmas on your own as well.
val lemma1 = Theorem(
Node(Node(a, p, b), q, c).inOrder
=== (a.inOrder ++ (List(p) ++ b.inOrder)) ++ (List(q) ++ c.inOrder)
):
???
val lemma2 = Theorem(
Node(a, p, Node(b, q, c)).inOrder
=== (a.inOrder ++ (List(p) ++ (b.inOrder ++ (List(q) ++ c.inOrder))))
):
???
???
val rotationInvarianceFromLemmas = Theorem(
Node(Node(a, p, b), q, c).inOrder === Node(a, p, Node(b, q, c)).inOrder
):
???
src/main/scala/itp/TreeRotation.scala