backprop: Heterogeneous automatic differentiation

[ bsd3, library, math ]

Write your functions to compute your result, and the library will automatically generate functions to compute your gradient.

Implements heterogeneous reverse-mode automatic differentiation, commonly known as "backpropagation".

See https://backprop.jle.im for official introduction and documentation.



Flags

Automatic Flags
Name         Description    Default
vinyl_0_14                  Enabled

Use -f <flag> to enable a flag, or -f -<flag> to disable that flag.

Versions [RSS] 0.0.1.0, 0.0.2.0, 0.0.3.0, 0.1.0.0, 0.1.1.0, 0.1.2.0, 0.1.3.0, 0.1.4.0, 0.1.5.0, 0.1.5.1, 0.1.5.2, 0.2.0.0, 0.2.1.0, 0.2.2.0, 0.2.3.0, 0.2.4.0, 0.2.5.0, 0.2.6.0, 0.2.6.1, 0.2.6.2, 0.2.6.3, 0.2.6.4, 0.2.6.5 (info)
Change log CHANGELOG.md
Dependencies base (>=4.7 && <5), containers, deepseq, microlens, primitive, reflection, transformers, vector, vinyl (>=0.9.1 && <0.14 || >=0.14.2) [details]
Tested with ghc >=8.4
License BSD-3-Clause
Copyright (c) Justin Le 2018
Author Justin Le
Maintainer justin@jle.im
Category Math
Home page https://backprop.jle.im
Bug tracker https://github.com/mstksg/backprop/issues
Source repo head: git clone https://github.com/mstksg/backprop
Uploaded by jle at 2023-07-23T21:49:33Z
Distributions LTSHaskell:0.2.6.5, NixOS:0.2.6.5, Stackage:0.2.6.5
Reverse Dependencies 8 direct, 5 indirect [details]
Downloads 13861 total (49 in the last 30 days)
Rating 2.25 (votes: 2) [estimated by Bayesian average]
Status Docs uploaded by user
Build status unknown [no reports yet]

Readme for backprop-0.2.6.5


backprop


[![Join the chat at https://gitter.im/haskell-backprop/Lobby](https://badges.gitter.im/haskell-backprop/Lobby.svg)](https://gitter.im/haskell-backprop/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Documentation and Walkthrough

Automatic heterogeneous back-propagation.

Write your functions to compute your result, and the library will automatically generate functions to compute your gradient.

Differs from ad by offering full heterogeneity -- each intermediate step and the resulting value can have different types (matrices, vectors, scalars, lists, etc.).

Useful for applications in differentiable programming and deep learning, for creating and training numerical models, especially as described in this blog post on a purely functional typed approach to trainable models. Overall, it is intended for the implementation of gradient descent and other numeric optimization techniques. It is comparable to the Python library autograd.

Currently up on hackage, with haddock documentation! However, a proper library introduction and usage tutorial is available here. See also my introductory blog post. You can also find help or support on the gitter channel.

If you want to provide backprop for users of your library, see this guide to equipping your library with backprop.
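The guide goes into detail, but the general shape is to pair each operation with its gradient and lift it to work on BVars. Below is a rough sketch only (constraints approximate), using op1 and liftOp1 from Numeric.Backprop; the function name square is just for illustration:

import Data.Reflection (Reifies)  -- from the reflection package
import Numeric.Backprop

-- Sketch only: a backprop-aware version of x^2.  op1 pairs the result with
-- a continuation that scales the incoming gradient by d(x^2)/dx = 2*x, and
-- liftOp1 lifts the resulting Op to work on BVars.
square :: (Backprop a, Num a, Reifies s W) => BVar s a -> BVar s a
square = liftOp1 . op1 $ \x -> (x * x, \g -> g * 2 * x)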

MNIST Digit Classifier Example

My blog post introduces the concepts in this library in the context of training a handwritten digit classifier. I recommend reading that first.

There are also some literate haskell examples in the source (rendered as pdf here), which can be built (if stack is installed) using:

$ ./Build.hs exe

There is a follow-up tutorial on using the library with more advanced types, with extensible neural networks a la this blog post, available as literate haskell and also rendered as a PDF.

Brief example

(This is a really brief version of the documentation walkthrough and my blog post)

The quick example below describes running a neural network with one hidden layer, parameterized by two weight matrices and two bias vectors, and calculating its squared error with respect to a target. Vector/matrix types are from the hmatrix package.

Let's make a data type to store our parameters, with convenient accessors using lens:

{-# LANGUAGE DataKinds       #-}
{-# LANGUAGE TemplateHaskell #-}

import Control.Lens (makeLenses)              -- or Lens.Micro.TH, from microlens-th
import Numeric.LinearAlgebra.Static.Backprop  -- from the hmatrix-backprop package

data Network = Net { _weight1 :: L 20 100
                   , _bias1   :: R 20
                   , _weight2 :: L  5  20
                   , _bias2   :: R  5
                   }

makeLenses ''Network

(R n is an n-length vector, L m n is an m-by-n matrix, etc., #> is matrix-vector multiplication)
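One more ingredient that the gradient functions below rely on: Network needs a Backprop instance so that gradients can be initialized and combined. The class has Generics-based defaults, so a minimal sketch (assuming DeriveGeneric is enabled and Network is given a deriving Generic clause, not shown above) is just:

-- Sketch only: with `deriving Generic` on Network, the Backprop methods
-- (zero, add, one) all fall back to their Generics-based defaults.
-- Backprop comes from Numeric.Backprop.
instance Backprop Network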

"Running" a network on an input vector might look like this:

runNet net x = z
  where
    y = logistic $ (net ^^. weight1) #> x + (net ^^. bias1)
    z = logistic $ (net ^^. weight2) #> y + (net ^^. bias2)

logistic :: Floating a => a -> a
logistic x = 1 / (1 + exp (-x))
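(For reference, runNet's type here comes out to roughly the following; the class constraints are approximate, so check the haddocks for the exact ones.)

runNet :: Reifies s W => BVar s Network -> BVar s (R 100) -> BVar s (R 5)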

And that's it! runNet is now backpropagatable!

We can "run" it using evalBP2 (the two-argument version of evalBP):

evalBP2 runNet :: Network -> R 100 -> R 5
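For example, with concrete values in scope (the names myNet and myVector here are hypothetical):

evalBP2 runNet myNet myVector :: R 5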

If we write a function to compute errors:

squaredError target output = error `dot` error
  where
    error = target - output

we can "test" our networks:

netError target input net = squaredError (auto target)
                                         (runNet net (auto input))
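Here auto injects a constant, non-differentiated value into a BVar. The rough shape of netError (class constraints elided) is:

netError :: R 5 -> R 100 -> BVar s Network -> BVar s Double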

Again, this can be run with evalBP:

evalBP (netError myTarget myVector) :: Network -> Double

We just wrote a normal function to compute the error of our network. With the backprop library, we also get a function to compute the gradient!

gradBP (netError myTarget myVector) :: Network -> Network

Now, we can perform gradient descent!

gradDescent
    :: R 100
    -> R 5
    -> Network
    -> Network
gradDescent x targ n0 = n0 - 0.1 * gradient
  where
    -- gradBP returns the gradient of the error with respect to every
    -- parameter in the Network; the update arithmetic assumes a
    -- component-wise Num instance for Network.
    gradient = gradBP (netError targ x) n0
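A minimal sketch of iterating this over many samples (the helper name trainList is hypothetical):

-- Fold gradDescent over a list of (input, target) pairs.
trainList :: [(R 100, R 5)] -> Network -> Network
trainList []            n = n
trainList ((x, t) : xs) n = trainList xs (gradDescent x t n)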

Ta dah! We were able to compute the gradient of our error function just by saying how to compute the error itself.

For a more fleshed-out example, see the documentation, my blog post, and the MNIST tutorial (also rendered as a pdf).

Benchmarks and Performance

Here are some basic benchmarks comparing the library's automatic differentiation process to "manual" differentiation by hand, using the MNIST tutorial as an example:

[benchmark results chart]

Here we compare:

  1. "Manual" differentiation of a 784 x 300 x 100 x 10 fully-connected feed-forward ANN.
  2. Automatic differentiation using backprop and the lens-based accessor interface
  3. Automatic differentiation using backprop and the "higher-kinded data"-based pattern matching interface
  4. A hybrid approach that manually provides gradients for individual layers but uses automatic differentiation for chaining the layers together.

We can see that simply running the network and functions (using evalBP) incurs virtually zero overhead. This means that library authors could actually export only backprop-lifted functions, and users would be able to use them without losing any performance.

As for computing gradients, there exists some associated overhead, from three main sources. Of these, the building of the computational graph and the Wengert Tape wind up being negligible. For more information, see a detailed look at performance, overhead, and optimization techniques in the documentation.

Note that the manual and hybrid modes almost overlap in the range of their random variances.

Comparisons

backprop can be compared and contrasted to many other similar libraries with some overlap:

  1. The ad library (and variants like diffhask) support automatic differentiation, but only for homogeneous/monomorphic situations. All values in a computation must be of the same type --- so, your computation might be the manipulation of Doubles through a Double -> Double function.

    backprop allows you to mix matrices, vectors, doubles, integers, and even key-value maps as a part of your computation, and they will all be backpropagated properly with the help of the Backprop typeclass.

  2. The autograd library is a very close equivalent to backprop, implemented in Python for Python applications. The difference between backprop and autograd is mostly the difference between Haskell and Python --- static types with type inference, purity, etc.

  3. There is a link between backprop and deep learning/neural network libraries like tensorflow, caffe, and theano, which all support some form of heterogeneous automatic differentiation. Haskell libraries doing similar things include grenade.

    These are all frameworks for working with neural networks or other gradient-based optimizations --- they include things like built-in optimizers, tools for managing training data, and built-in models to use out of the box. backprop could be used as a part of such a framework, as I described in my A Purely Functional Typed Approach to Trainable Models blog series; however, the backprop library itself does not provide any built-in models, optimizers, or automated data processing pipelines.

See documentation for a more detailed look.

Todo

  1. Benchmark against competing back-propagation libraries like ad, and auto-differentiating tensor libraries like grenade

  2. Write tests!

  3. Explore opportunities for parallelization. There are some naive ways of directly parallelizing right now, but potential overhead should be investigated.

  4. Some open questions:

    a. Is it possible to support constructors with existential types?

    b. How to support "monadic" operations that depend on results of previous operations? (ApBP already exists for situations that don't)

    c. What needs to be done to allow us to automatically do second- and third-order differentiation as well? This might be useful for certain ODE solvers that rely on second-order gradients and Hessians.