Curry-Howard in Java
The Curry-Howard Isomorphism (CHI) is, in my humble opinion, one of the more fascinating discoveries in computer science. However, it is little known, as introductions are often too theoretical and founded on terse, presumed mathematical knowledge. Practical usage often relies on advanced languages built on dependent types (like Idris or Agda), which in turn assume a good understanding of functional languages like Haskell. That can pose quite a barrier for the casual bystander who just wants to dip their toes in the water without getting sucked into the rapids.
So for these people, I will introduce the CHI with examples in Java, giving a small primer on how it can be used. Of course, the type system of Java is very limited compared to the functional languages listed above (there’s a reason they were made!) but I think it will suffice to demonstrate some of the basic ideas.
Wait but why?
If you’re the kind of person that likes to know why you should care about something before getting invested: this section is for you. People who like interesting stuff for its own sake and don’t care about applications can freely skip it.
So why is this connection between math and code important? Well, when a connection is found between two (seemingly) different fields, it allows you to use tools and tricks from one field on the other, creating new opportunities. So as a programmer, this allows you to use the power of maths for your programs. A case where this is very useful is verification. Current forms of testing like unit testing allow the creator to eliminate a fair share of bugs, but they do not give complete certainty about the correctness of the code. Using the CHI, however, it is possible to create mathematical proofs of correctness, giving absolute certainty. (Although of course this is not easy.)
On the other side, mathematicians can leverage computers in their work via the CHI. Using the correspondence, they can encode their proofs as programs. These programs will only compile if the proof is correct, eliminating the risk that there is a subtle error in the proof that nobody noticed.
There is certainly more to be discussed, like what the difficulties are with this verification, or what other applications exist, but that would lead us too far.
The Correspondence
Let’s dive in. The CHI was discovered by observing that there is a very deep connection (an isomorphism, as mathematicians say) between mathematics and programming. It says that when a mathematician states some theorem in a mathematical framework and proves it, and a programmer states some type in a programming language and creates a program that constructs a value of this type, they are fundamentally doing the same thing.
| Math | Programming |
| --- | --- |
| Theorem | Type |
| Proof (for a theorem) | Code (to construct a value of the type) |
| Checking if a proof is valid | Typechecking |
| Mathematical framework (e.g. first-order logic) | Type system |
A theorem is true if it has a proof (yes, this assumes soundness and completeness, but let’s not get into the nitty-gritty of that). Using the correspondence: a type is “true” if you can construct a value of this type. This is called inhabitation: a type is inhabited if you can make a value of this type.
| Math | Programming |
| --- | --- |
| The theorem is true | The type is inhabited |
Curry-Howard in Java
So far the CHI might seem very abstract. If every type is a theorem, what does my List<String> type mean as a theorem? If every program is a proof, what does my sorting function prove? Similarly, the idea that some types are not inhabited might seem strange for people coming from conventional imperative programming.
It is important here to understand that most mathematics is concerned with “interesting” logics (e.g. first-order logic or dependent type theory). But you could just as well construct a logic where everything is true. Of course, this would be a boring logic, and not very useful. Similarly, there are type systems that are very boring and not useful (in a mathematical sense), and type systems that are more interesting and expressive (remember, a type system and a logic are the same thing under the CHI). Basically every type system of an imperative language falls in the former category: they are pretty boring from a math point of view.
To get back to the original question: observe that in Java every type is inhabited, because of null. This means that everything is provable, and hence every type is true. So the List<String> type, or whichever other type, doesn’t have much meaning. They are all just True.
But let’s pretend that null does not exist in Java (even its inventor, Tony Hoare, called null his “billion-dollar mistake”).
Now let’s try to use what we learned and construct a simple logic and some proofs in Java. We will start with truth and falsity. The True proposition should always be true, so we make it a type that is always constructible:
public class True {
    public True() {}
}
Pretty lame, I know. But all beginnings are small. To create False, we need a type that cannot be constructed (barring null). I don’t know any way to do this in Java: if we don’t supply a constructor, Java creates a default constructor. (And making the class abstract does not prevent a subclass from being made.) So let’s cheat¹ and just use this:
public final class False {
    public False() {
        assert false;
    }
}
I’ll use asserts when I’m cheating.
Now on to more interesting stuff. How could you create a logical and? A and B is only true if both A and B are true. So the type for A and B must only be constructible if both A and B are constructible. In essence, this means and is a pair! Indeed, you can only create a pair if you can also make both of its components.
public final class And<A, B> {
    private final A left;
    private final B right;

    public And(final A left, final B right) {
        assert left != null && right != null;
        this.left = left;
        this.right = right;
    }

    public A projLeft() {
        return left;
    }

    public B projRight() {
        return right;
    }
}
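To make the one-constructor/two-destructor shape concrete, here is a small standalone sketch (the And class is repeated as a nested class with the null asserts omitted so the file compiles on its own; the String/Integer instantiation is just an illustration):

```java
// Standalone demo of And: one way to construct, two ways to destruct.
public class AndDemo {
    public static final class And<A, B> {
        private final A left;
        private final B right;
        public And(final A left, final B right) {
            this.left = left;
            this.right = right;
        }
        public A projLeft() { return left; }
        public B projRight() { return right; }
    }

    public static void main(String[] args) {
        // Constructing And<String, Integer> requires a value of BOTH types.
        And<String, Integer> both = new And<>("proof of A", 42);
        // The two projections recover the individual "proofs".
        System.out.println(both.projLeft());
        System.out.println(both.projRight());
    }
}
```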
This is the first nontrivial piece of code, so make sure you understand it. There is a single way to construct and two ways to destruct. This is just like in an actual logic, where there would be one derivation rule to create an and and two ways to use it.
Next up, the implication. A => B is true if, when A is true, B is also true. So B is constructible if A is constructible. This is in essence a function. Indeed, a function can create a B, given an A. This means that our implication will just be a wrapper around Function.
import java.util.function.Function;

public final class Imply<A, B> {
    private final Function<A, B> function;

    public Imply(final Function<A, B> function) {
        assert function != null;
        this.function = function;
    }

    public B apply(final A a) {
        return function.apply(a);
    }
}
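Using an Imply is exactly the rule of modus ponens: from a proof of A => B and a proof of A we obtain a proof of B. A standalone sketch (the Imply class is repeated with the assert omitted; the String/Integer instantiation is just an illustration):

```java
import java.util.function.Function;

// Standalone demo of Imply: wrapping a Function and applying it is modus ponens.
public class ImplyDemo {
    public static final class Imply<A, B> {
        private final Function<A, B> function;
        public Imply(final Function<A, B> function) { this.function = function; }
        public B apply(final A a) { return function.apply(a); }
    }

    public static void main(String[] args) {
        // A "proof" of String => Integer: every string yields its length.
        Imply<String, Integer> proof = new Imply<>(String::length);
        // Modus ponens: a proof of the premise gives a proof of the conclusion.
        System.out.println(proof.apply("hello"));
    }
}
```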
A Proof
With this small toolbox, we are ready to make our first proof. Consider the theorem: if A and B are true, then B is true. Let’s state it in Java:
// Theorem 1: A & B => B
Imply<And<A, B>, B> thm1;
In the comment, I stated the exact same thing, but using symbols and infix notation instead of prefix.
To prove this type, we need to create a program for this type:
thm1 = new Imply<>(And::projRight);
If the theorem is very basic, the proof is even more so: we simply use one of the two deconstructors, which gives us the proof after wrapping it in the Imply. Next up, I’m going to add a little more logic so we can do more interesting proofs.
More logic
We have two more basic logic constructs left: the or and the not. Implementing the former, we make Or<A, B> constructible using either an argument of A or an argument of B. For And we needed something that requires both arguments; for Or one of them suffices. We can implement this using polymorphism:
import java.util.function.Function;

public interface Or<A, B> {
    <C> C caseOf(Function<A, C> leftCase, Function<B, C> rightCase);
}

public class OrLeft<A, B> implements Or<A, B> {
    private final A left;

    public OrLeft(A left) {
        this.left = left;
    }

    public <C> C caseOf(Function<A, C> leftCase, Function<B, C> rightCase) {
        return leftCase.apply(left);
    }
}

public class OrRight<A, B> implements Or<A, B> {
    private final B right;

    public OrRight(B right) {
        this.right = right;
    }

    public <C> C caseOf(Function<A, C> leftCase, Function<B, C> rightCase) {
        return rightCase.apply(right);
    }
}
Note the symmetry: Or has two constructors and one deconstructor, And has one constructor and two deconstructors.
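Here is that symmetry in action, as a standalone sketch (the Or hierarchy is condensed into one file; the String/Integer instantiation is only an illustration):

```java
import java.util.function.Function;

// Standalone demo of Or: two ways to construct, one way to destruct.
public class OrDemo {
    public interface Or<A, B> {
        <C> C caseOf(Function<A, C> leftCase, Function<B, C> rightCase);
    }
    public static final class OrLeft<A, B> implements Or<A, B> {
        private final A left;
        public OrLeft(final A left) { this.left = left; }
        public <C> C caseOf(Function<A, C> leftCase, Function<B, C> rightCase) {
            return leftCase.apply(left);
        }
    }
    public static final class OrRight<A, B> implements Or<A, B> {
        private final B right;
        public OrRight(final B right) { this.right = right; }
        public <C> C caseOf(Function<A, C> leftCase, Function<B, C> rightCase) {
            return rightCase.apply(right);
        }
    }

    public static void main(String[] args) {
        // Two constructors: either side suffices as evidence.
        Or<String, Integer> fromLeft = new OrLeft<>("left proof");
        Or<String, Integer> fromRight = new OrRight<>(7);
        // One destructor: case analysis must be prepared for both sides.
        System.out.println(fromLeft.caseOf(s -> s, n -> "number " + n));
        System.out.println(fromRight.caseOf(s -> s, n -> "number " + n));
    }
}
```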
Lastly, there is Not to discuss. This might be the least intuitive of the bunch (pun not intended), which is why I kept it for last. In effect, not A must be provable only if A is not provable. So our implementation Not<A> must only be constructible if the type A isn’t constructible. We can do this by interpreting not A as A => False. Indeed, we can never construct False on its own, so if A => False exists (it’s true), A must be false. Similarly, if A => False does not exist (it’s false), this must be because A is true.
import java.util.function.Function;

public final class Not<A> {
    private final Function<A, False> proof;

    public Not(final Function<A, False> proof) {
        this.proof = proof;
    }

    public False apply(final A a) {
        return proof.apply(a);
    }
}
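One theorem we can already prove with Not is double-negation introduction, A => !!A: given a proof a of A, any claimed refutation of A can be applied to a to produce False. A standalone sketch (classes condensed, asserts kept only in False; the String instantiation merely shows the proof typechecks):

```java
import java.util.function.Function;

// Standalone proof of A => !!A (double-negation introduction).
public class DoubleNegDemo {
    public static final class False {
        public False() { assert false; }
    }
    public static final class Not<A> {
        private final Function<A, False> proof;
        public Not(final Function<A, False> proof) { this.proof = proof; }
        public False apply(final A a) { return proof.apply(a); }
    }
    public static final class Imply<A, B> {
        private final Function<A, B> function;
        public Imply(final Function<A, B> function) { this.function = function; }
        public B apply(final A a) { return function.apply(a); }
    }

    // Theorem: A => !!A. Proof: from a, refute any refutation na of A.
    public static <A> Imply<A, Not<Not<A>>> doubleNegIntro() {
        return new Imply<A, Not<Not<A>>>(a -> new Not<Not<A>>(na -> na.apply(a)));
    }

    public static void main(String[] args) {
        Imply<String, Not<Not<String>>> thm = DoubleNegDemo.<String>doubleNegIntro();
        // The proof term exists, i.e. the type is inhabited.
        System.out.println(thm.apply("a proof of A") != null);
    }
}
```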
Now we have all the basic elements of your typical logic. Time to make some more proofs!
More proofs
Just like with any maths, you need to work on it yourself to really understand it. It’s a good exercise to try to prove these types before looking at my programs. Or even better: state your own theorems, convert them to types, and prove them.
// Theorem 2: A | A => A
Function<Or<A, A>, A> thm2;
thm2 = given -> given.caseOf(a -> a, a -> a);
So far only one-liners. To prove a law of De Morgan we’ll need a bit more:
// Theorem 3: !X | !Y => !(X & Y)
Imply<Or<Not<X>, Not<Y>>, Not<And<X, Y>>> thm3;
thm3 = new Imply<>((Or<Not<X>, Not<Y>> x) -> new Not<>(
    (And<X, Y> y) -> x.caseOf(
        (Not<X> na) -> na.apply(y.projLeft()),
        (Not<Y> nb) -> nb.apply(y.projRight())
    )
));
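For comparison, another De Morgan law, !(X | Y) => !X & !Y, is also provable constructively. A standalone sketch with condensed copies of the classes above (asserts omitted except in False, explicit type arguments added to help Java’s inference; the String/Integer instantiation merely shows the proof typechecks):

```java
import java.util.function.Function;

// Standalone proof of !(X | Y) => !X & !Y.
public class DeMorganDemo {
    public static final class False { public False() { assert false; } }
    public static final class And<A, B> {
        private final A left;
        private final B right;
        public And(final A left, final B right) { this.left = left; this.right = right; }
        public A projLeft() { return left; }
        public B projRight() { return right; }
    }
    public static final class Imply<A, B> {
        private final Function<A, B> function;
        public Imply(final Function<A, B> function) { this.function = function; }
        public B apply(final A a) { return function.apply(a); }
    }
    public static final class Not<A> {
        private final Function<A, False> proof;
        public Not(final Function<A, False> proof) { this.proof = proof; }
        public False apply(final A a) { return proof.apply(a); }
    }
    public interface Or<A, B> {
        <C> C caseOf(Function<A, C> leftCase, Function<B, C> rightCase);
    }
    public static final class OrLeft<A, B> implements Or<A, B> {
        private final A left;
        public OrLeft(final A left) { this.left = left; }
        public <C> C caseOf(Function<A, C> l, Function<B, C> r) { return l.apply(left); }
    }
    public static final class OrRight<A, B> implements Or<A, B> {
        private final B right;
        public OrRight(final B right) { this.right = right; }
        public <C> C caseOf(Function<A, C> l, Function<B, C> r) { return r.apply(right); }
    }

    // Theorem: !(X | Y) => !X & !Y.
    // Proof: from a refutation of X | Y, refute X via OrLeft and Y via OrRight.
    public static <X, Y> Imply<Not<Or<X, Y>>, And<Not<X>, Not<Y>>> deMorgan() {
        return new Imply<Not<Or<X, Y>>, And<Not<X>, Not<Y>>>(no ->
            new And<Not<X>, Not<Y>>(
                new Not<X>(x -> no.apply(new OrLeft<X, Y>(x))),
                new Not<Y>(y -> no.apply(new OrRight<X, Y>(y)))
            )
        );
    }

    public static void main(String[] args) {
        // The proof term exists for any X and Y, so the type is inhabited.
        System.out.println(DeMorganDemo.<String, Integer>deMorgan() != null);
    }
}
```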
As Java is not exactly great for type theory, you’ll notice that these proofs get hard to follow very quickly. Specialized languages for theorem proving have ways of dealing with this (interactive provers, holes, tactics), but the problem can remain even then.
Intuitive flaws
Try to prove the following theorem: !(!A) => A. It seems simple enough, but you’ll quickly realize that it can’t be done. How come?
Even worse, there are many other statements that can’t be proven. Famously, the law of excluded middle is not provable in this logic:
// Theorem 4: A | !A
Or<A, Not<A>> thm4;
thm4 = ? // not provable
This is because the logic we created is not entirely the same as “regular” first-order logic.
Here we are bearing the consequences of the demand to make everything constructible. For example, when you want to prove that there is some number X with a property Y, you need to actually give X. In regular maths this is not always the case: you can sometimes argue that X definitely must exist without actually showing what it is. Hence constructive logic is weaker than regular (classical) logic. Everything that can be proven in constructive logic is provable in classical logic, but not the other way around.
To get back to the law of excluded middle: you could only prove it by giving a program that always decides whether A is true or not. Sadly, this is not possible: take A to be “this Turing machine terminates”. You know for certain that a Turing machine will either terminate or not, but you cannot generally decide which of the two it will be, and hence you can’t give a constructive proof.
References:
Todo
Footnotes:

1. This would not be acceptable in a real setting: new False() would still typecheck, hence you could prove any false theorem (i.e. the logic isn’t sound). Again, this is a limitation of Java, but we will turn a blind eye to it, just like we did for null. There are other, similar problems in the content that follows, e.g. for the Or you would need to be able to guarantee that no implementation besides OrLeft and OrRight can ever be created. ↩