Vouivre Digital

Dependent type infrastructure for machine learning


To build the system you will need:

Then, from the project's root directory, invoke:

$ autoreconf -vif && ./configure && make

Run the tests with:

$ make check

Finally, to install the Scheme source code and generated binaries where Guile can find them, run as root:

$ make install

You should now be able to use the (vouivre curry) and (vouivre autodiff) modules in your own source code or at a REPL. The examples on this website use Vouivre, a special language for partial application and type checking. To follow along, change the language at the REPL:

> ,L vouivre

TIP: This language is still finicky, especially regarding special syntax and macros. If you only wish to use the automatic differentiation routines, you can avoid it entirely by remaining in Scheme and avoiding partial application.

TIP: It is possible to avoid Autotools and Guix: for the former, manually load the Scheme source in your REPL; for the latter, avoid or hack the mnist.scm source file. See HACKING in the project's root directory for more details.


We are developing dependent type infrastructure for machine learning. While this is still largely a work in progress, it is already possible to declare function types, distinguish functions from non-functions, partially apply them, and get an error message at compile time (more precisely, at macro-expansion time) if a function is applied improperly.


To create lambda functions that can be partially applied, we provide λc:

> ((λc x (λc y x)) 1 2)

TIP: You can still use Scheme's λ to create regular procedures.
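The behaviour of λc can be sketched outside Scheme. Here is a minimal, hypothetical Python analogue (not part of the project) in which a curried function accepts its arguments any number at a time:

```python
def curry(f, n):
    """A sketch of λc's behaviour: an n-argument function that can be
    applied to any number of arguments at a time."""
    def curried(*args):
        if len(args) >= n:
            return f(*args[:n])
        return lambda *more: curried(*args, *more)
    return curried

k = curry(lambda x, y: x, 2)   # analogous to (λc x (λc y x))
print(k(1, 2))                 # both arguments at once -> 1
print(k(1)(2))                 # one at a time          -> 1
```

Either calling style reaches the same result, which is exactly what makes partial application transparent to the caller.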


Vouivre uses a simple type system to know whether an expression can be applied and, if so, to how many arguments. `#f' indicates a regular Scheme expression without a type; `0' is the type of anything that is not a function; and a cons cell is the type of a curried function, its car and cdr being either the types of the domain and codomain, respectively, or naturals greater than zero representing type variables.

> (type-of "a string")

> (type-of 'x)

> (type-of '(λc x x))
(20 . 20)
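To make the encoding concrete, here is a hypothetical Python sketch (our illustration, not the project's code) modelling cons cells as pairs, with 0 for non-functions and naturals greater than zero for type variables:

```python
def arity(t):
    """How many arguments a value of this type can still consume:
    count the nesting depth of cons cells (modelled as pairs) to the right."""
    n = 0
    while isinstance(t, tuple):
        n += 1
        t = t[1]           # descend into the codomain
    return n

identity = (20, 20)        # the type of (λc x x), with 20 a type variable
plus = (0, (0, 0))         # a two-argument function over non-functions
print(arity(identity), arity(plus), arity(0))  # -> 1 2 0
```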

To declare the type of something you can use the `∷' macro:

> (∷ my-var 0)
> (type-of 'my-var)


There are three ways to define things. The first is to use Scheme's `define', as usual. The second is `definec', which is like λc but for named, partially applicable functions; in this case the system assigns a type to the defined symbol. The third, `cudefine', imports Scheme routines into the type system; in this case it is necessary to first declare a type for the function using `∷':

> (definec (my-∘ g f) (λc x (g (f x))))
> (type-of 'my-∘)
((24 . 25) . ((26 . 24) . (26 . 25)))

> (∷ my-+ (0 0 . 0))
> (cudefine (my-+ x y) (rnrs:+ x y))
> (my-∘ (my-+ 2) (my-+ 1) 10)
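The composition example can be mirrored in plain Python (a conceptual sketch with names of our choosing): my-∘ takes g of type b → c and f of type a → b and yields a function of type a → c, which is exactly what its inferred type above expresses.

```python
def compose(g, f):
    """my-∘ as a plain function: (b -> c) -> (a -> b) -> (a -> c)."""
    return lambda x: g(f(x))

add2 = lambda x: x + 2          # analogous to (my-+ 2)
add1 = lambda x: x + 1          # analogous to (my-+ 1)
print(compose(add2, add1)(10))  # like (my-∘ (my-+ 2) (my-+ 1) 10) -> 13
```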

TIP: When the cdr of a cons is itself a cons, the printer drops the `.', leading to somewhat confusing type expressions, as seen above where the (0 . (0 . 0)) in the type declaration of `my-+' prints as (0 0 . 0) (yes, this web page is generated with the same printer :-). For this reason, `type-of' produces a string for better readability.
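The dot-dropping rule itself is easy to emulate. This hypothetical Python sketch reproduces it for types modelled as nested pairs:

```python
def show(t):
    """Print a cons-based type the way the Scheme printer does:
    when the cdr is itself a cons, splice it in and drop the dot."""
    if not isinstance(t, tuple):
        return str(t)
    car, cdr = t
    if isinstance(cdr, tuple):
        return "(" + show(car) + " " + show(cdr)[1:-1] + ")"
    return "(" + show(car) + " . " + show(cdr) + ")"

print(show((0, (0, 0))))   # (0 . (0 . 0)) prints as -> (0 0 . 0)
```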

Using the Base Module

Some procedures from (guile), (rnrs base), and (srfi srfi-1), as well as a few of our own, already have partially applicable counterparts in (vouivre base). This module is implemented in the Vouivre extension of Scheme and must be loaded after changing the language. Only then run:

> (load "examples/base.scm")

The simplest way to try it out is to change modules:

> ,module (vouivre base)

Now you have access to all functions listed for that module in the documentation.

> (reduce + (map (flip expt 2) '(0 1 2 3)))
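As a point of comparison, here is the same computation in plain Python (a sketch; note that flip swaps a binary function's arguments, so (flip expt 2) raises its argument to the power 2):

```python
from functools import reduce

def flip(f):
    """flip: swap a binary function's arguments."""
    return lambda x, y: f(y, x)

square = lambda x: flip(pow)(2, x)   # (flip expt 2): x to the 2nd power
print(reduce(lambda a, b: a + b, map(square, [0, 1, 2, 3])))  # 0+1+4+9 -> 14
```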

Automatic Differentiation

There are two modes of differentiation: forward and reverse. Although they compute the same values, reverse mode is the preferred choice for applications that take the gradient of a scalar loss function with respect to a large number of inputs. In any case, we will show examples of both. These assume you are in the base module under the Vouivre language.
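Forward mode can be sketched with dual numbers. The following Python illustration (ours, not the project's implementation) carries a value and its derivative through each arithmetic operation:

```python
class Dual:
    """Forward-mode AD via dual numbers: a (value, derivative) pair."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)

def fdiff(f):
    """Derivative of a scalar function: seed the dual part with 1."""
    return lambda x: f(Dual(x, 1.0)).d

cube = lambda x: x * x * x
print(fdiff(cube)(5.0))   # d(x^3)/dx = 3x^2 at x = 5 -> 75.0
```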

Forward All!

Elementary functions like sine, cosine, exp, expt, etc. have been extended to work on scalars and arrays in a straightforward way:

> (exp #(0 1 2))
#(1.0 2.718281828459045 7.38905609893065)
> (expt #2((0 1 2) (3 4 5)) 3)
#2((0 1 8) (27 64 125))


Differentiation, put simply, measures the change of a function's output as one of its inputs changes. When the function in question maps one array to another, the differentiator must therefore produce the derivative of each element of the output with respect to each element of the input:

> ((fdiff (flip expt 3)) #2((0 1 2) (3 4 5)))
#4((((0 0 0) (0 0 0)) ((0 3 0) (0 0 0)) ((0 0 12) (0 0 0))) (((0 0 0) (27 0 0)) ((0 0 0) (0 48 0)) ((0 0 0) (0 0 75))))

Above, for example, the element 75 at index (1 2 1 2) is precisely the derivative of the output 125 at index (1 2) in the previous section with respect to the input 5 at index (1 2): d(x³)/dx = 3x² = 75 at x = 5.
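The shape of that result can be reproduced in a hypothetical pure-Python sketch: for an elementwise function, the Jacobian is non-zero only where the output index equals the input index.

```python
# Jacobian of the elementwise cube of a 2x3 array, as nested lists.
# Pairing output index (i, j) with input index (k, l) gives a rank-4
# array, non-zero only on the diagonal (i, j) == (k, l), where it is 3x^2.
x = [[0, 1, 2], [3, 4, 5]]
J = [[[[3 * x[i][j] ** 2 if (i, j) == (k, l) else 0
       for l in range(3)]
      for k in range(2)]
     for j in range(3)]
    for i in range(2)]
print(J[1][2][1][2])   # derivative of output 125 w.r.t. input 5 -> 75
```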


The file examples/example.scm shows how to use the module to train a neural network with mini-batch gradient descent on part of the MNIST dataset. The following downloads the dataset from the web the first time it is called.

> ,module (guile-user)
> (load "examples/example.scm")

The ground truth labels:

> y
#u8(5 0 4 1 9 2 1 3 1 4)

To show the model's initial classification of the first ten images prior to training:

> (list-tabulate 10 (lambda (i) (classify (array-cell-ref x i))))
(2 2 2 2 2 2 2 2 2 1)

Now we train for ten epochs of five steps each (you should see the loss printed after each epoch):

> (train 10 5)

The resulting classification:

> (list-tabulate 10 (lambda (i) (classify (array-cell-ref x i))))
(5 0 4 1 9 2 1 3 1 4)

Much better!

TIP: The differentiation engine still lacks some optimizations, so it is not yet realistic to scale the model to learn the full dataset.