Automatic differentiation

Automatic differentiation gives derivatives and gradients of numerical programs with respect to their input parameters. This is essential when a numerical model is fitted to data, updated with new observations, or when the model controls are optimized for a desired outcome.

Numerical models are widely used to estimate response variables given parameters and operational controls. If gradients with respect to these parameters are needed, a conventionally written program must either be run many times with small perturbations to obtain numerical derivatives, or the derivatives must be painstakingly derived and implemented by hand.
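The perturbation approach described above can be sketched in a few lines of Python (a minimal illustration of our own, not taken from any particular library; the function `f` is just an example):

```python
import math

def f(x):
    # stand-in for an expensive numerical model
    return math.exp(x) * math.cos(x)

def central_difference(f, x, h=1e-6):
    # numerical derivative via small perturbations:
    # two extra model runs per input parameter
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
approx = central_difference(f, x)
exact = math.exp(x) * (math.cos(x) - math.sin(x))  # product rule, by hand
print(approx, exact)
```

The step size `h` must balance truncation error against floating-point round-off, and the cost grows with the number of parameters; these are the drawbacks that automatic differentiation avoids.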

Automatic differentiation (AD) refers to a class of mathematical, programming and compiler techniques that make it possible to obtain derivatives and gradients from a computer program. Common techniques include forward- and reverse-mode differentiation, source code transformation and adjoint methods.
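To make forward-mode differentiation concrete, here is a minimal dual-number sketch in Python (our own illustration, with made-up names such as `Dual`; real AD libraries are far more complete):

```python
import math

class Dual:
    """A value paired with its derivative (a 'dual number').

    Arithmetic on Duals propagates derivatives by the chain rule,
    which is the essence of forward-mode AD.
    """
    def __init__(self, val, der=0.0):
        self.val = val
        self.der = der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    # chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Differentiate f(x) = x*sin(x) + 3x at x = 2 by seeding der = 1
x = Dual(2.0, 1.0)
y = x * sin(x) + 3 * x
print(y.val, y.der)  # y.der equals sin(2) + 2*cos(2) + 3
```

Forward mode like this costs one sweep per input parameter; reverse mode (the basis of adjoint methods and backpropagation) instead yields the gradient with respect to all inputs in a single backward sweep, which is why it is preferred when there are many parameters and few outputs.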

We use automatic differentiation extensively in our work, in topics ranging from computational electrochemistry and partial differential equations to porous media flow, building site optimization and biochar production. Languages include C++, MATLAB, Julia and Python, and we work with both in-house and open source libraries that perform automatic differentiation. Picking the right technique for a specific application is essential, as there are many pitfalls in differentiating a complex program made up of thousands of lines of code.