# Musings

Here are some random thoughts about Wall.

# Why I wrote Wall

Over the past twenty years, I've worked on projects in C, C++, C#, Java, JavaScript, TypeScript, Swift, Kotlin, Groovy, Objective-C, Haskell, Erlang, Clojure, Elixir, Ruby, F#, and other languages. In doing so, I've learned a lot about how static typing can lead to a more pleasurable, bug-free coding experience. At the same time, types never felt natural to me because, in most languages, they require a meta-language on top of the language. Even in languages like Idris that do their best to mix types and terms through dependent types, the type system breaks down when faced with certain recursively defined dependent types. This is for good reason: System F, on which languages like Haskell and Idris are based, is not decidable, whereas types have to be decided at compile time so that the compiler knows what types to check for.

I created Wall to explore a language where the compiler keeps track of functions' potential domains instead of types. The idea itself is not new: this is just a flavor of memoization, which is arguably the oldest form of computing. The tricky part is that the memoization of a function in a Wall program is not fixed, but rather one of a potentially infinite family of memoizations. This is the case, for example, when the domain of a function depends on a file or on something read from stdin. So, the Wall compiler can be thought of as a push-out of memoizations, meaning that it constructs the domain from the memoized domains of several different versions of a function (the bottom part of a push-out diamond, or the co-equalizing part), all of which are connected by one underlying memoized function (the top part of the push-out diamond, or the "span").
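To make that a little more concrete, here is a minimal Haskell sketch (not Wall code) of the gluing step, under the simplifying assumption that a memoized domain is just the set of inputs a function was observed with in one run; the names `Memoized` and `combine` are made up for illustration.

```haskell
import qualified Data.Set as Set

-- One memoized view of a function: the inputs it was observed with in a
-- particular run (its memoized domain), plus the shared underlying function.
data Memoized a b = Memoized
  { observedDomain :: Set.Set a  -- the domain recorded in one run
  , underlying     :: a -> b     -- the one function every run shares (the "span")
  }

-- Glue two memoizations of the same function together by taking the union
-- of their observed domains (the bottom, co-equalizing part of the diamond).
combine :: Ord a => Memoized a b -> Memoized a b -> Memoized a b
combine m1 m2 = Memoized
  { observedDomain = observedDomain m1 `Set.union` observedDomain m2
  , underlying     = underlying m1  -- both views wrap the same function
  }
```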

This eliminates the need for a meta-language to describe types, kinds, and higher-kinded types; instead, the compiler only verifies that a function is never invoked with a value that could fall outside its domain. I'm sure that other people have thought of this and given it a name, but because I can't find one, I've coined the term function domain verification, or FDV.
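Reduced to its simplest form, the verification step could look something like the sketch below, again in Haskell rather than Wall; `verifyCall` is a hypothetical name, and the set is assumed to be the combined memoized domain from the previous sketch.

```haskell
import qualified Data.Set as Set

-- The essence of FDV: a call is accepted only when the argument is known
-- to lie inside the function's verified domain; otherwise the call site
-- is rejected at compile time.
verifyCall :: (Ord a, Show a) => Set.Set a -> a -> Either String ()
verifyCall domain arg
  | arg `Set.member` domain = Right ()
  | otherwise = Left ("value " ++ show arg ++ " may fall outside the function's domain")
```

The check itself is ordinary set membership; everything interesting lies in how the compiler arrives at that set in the first place.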

Doing this has required one pretty big compromise. Wall, like Dhall, is not Turing complete because it lacks general recursion. Recursive functions in Wall are represented internally by a tuple of a function and its energy that, when evaluated, returns a tuple with the same function and the energy diminished by one unit. A Wall program, then, stops deterministically and returns bottom (⊥, or the "middle finger of programming") when it no longer contains any energy. This is not unlike Haskell's bottom, except that, instead of a theoretical flipping-off, Wall serves you an eminently real ⊥ and then refuses, like a coward, to continue computing. The trick with Wall programs, then, is to design them so that the energy is always sufficient for how they will be used. This requires a level of additional orchestration: for example, deploying a webapp on a Kubernetes cluster with autoscaling, so that fresh instances (with fresh energy) can be started as needed. While this may seem a bit unclean, the reality of modern computing is that most programs are ephemeral and short-lived. Wall embraces the nature of these programs, building their short-lived nature into the compiler itself and using it to provide strong compile-time checking.
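Very loosely, the tuple-of-function-and-energy idea can be modelled as in the Haskell sketch below; `Fueled` and `run` are hypothetical names of my own and say nothing about Wall's actual internals.

```haskell
-- A rough model of bounded recursion: a step function paired with an
-- energy budget. Each evaluation step hands back the same function with
-- one unit of energy removed; when the energy runs out, the result is
-- Nothing, standing in for Wall's ⊥.
data Fueled a = Fueled
  { energy :: Int
  , step   :: a -> Either a a  -- Left means "keep going", Right means "done"
  }

run :: Fueled a -> a -> Maybe a
run (Fueled e f) x
  | e <= 0    = Nothing                          -- energy exhausted: ⊥
  | otherwise = case f x of
      Right done -> Just done                    -- finished within the budget
      Left next  -> run (Fueled (e - 1) f) next  -- same function, energy minus one
```

For example, `run (Fueled 10 (\n -> if n <= (0 :: Int) then Right n else Left (n - 1))) 3` finishes with `Just 0`, while the same program started with only 2 units of energy returns `Nothing`.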