
XAD: Fast, easy automatic differentiation in C++ and Python

XAD is a high-performance C++ automatic differentiation library designed for large-scale, performance-critical systems.

It provides forward and adjoint (reverse) mode automatic differentiation via operator overloading, with a strong focus on:

  • Low runtime overhead.
  • Minimal memory footprint.
  • Straightforward integration into existing codebases.

For Monte Carlo and other repetitive workloads, XAD also offers optional JIT backend support, enabling record-once / replay-many execution for an additional performance boost.

Latest Release: v2.0.0
C++ (adjoint mode):

Adouble x0 = 1.3;
Adouble x1 = 5.2;
tape.registerInput(x0);   // mark independent variables
tape.registerInput(x1);
tape.newRecording();      // start recording operations
Adouble y = func(x0, x1);
tape.registerOutput(y);
derivative(y) = 1.0;      // seed the output adjoint
tape.computeAdjoints();   // propagate adjoints back to the inputs
cout << "dy/dx0=" << derivative(x0) << "\n"
     << "dy/dx1=" << derivative(x1) << "\n";
Python (adjoint mode):

x0 = Real(1.3)
x1 = Real(5.2)
tape.registerInput(x0)    # mark independent variables
tape.registerInput(x1)
tape.newRecording()       # start recording operations
y = func(x0, x1)
tape.registerOutput(y)
y.derivative = 1.0        # seed the output adjoint
tape.computeAdjoints()    # propagate adjoints back to the inputs
print(f"dy/dx0={x0.derivative}")
print(f"dy/dx1={x1.derivative}")

Automatic Differentiation

Automatic differentiation (also called algorithmic differentiation) is a set of techniques for calculating partial derivatives of functions specified as computer programs. Since every program execution is composed of a sequence of simple operations with known derivatives (arithmetic operations and elementary mathematical functions such as sin, exp, and log), the chain rule can be applied repeatedly to calculate partial derivatives automatically. See the automatic differentiation mathematical background for more details.

Key Features

Any Order, Mode, Precision

Calculate derivatives in forward and adjoint mode, of any order, in single or double precision.

Checkpointing

Manage tape memory and achieve higher performance via checkpointing.

External Functions

Integrate functions from external libraries into the AD tape or manually optimise parts of the differentiated code.

Vector Modes

Compute multiple derivatives at once.

Eigen Support

Works with the popular linear algebra library Eigen.

JIT Backend Support (Optional)

Infrastructure for pluggable JIT backends, enabling record-once/replay-many workflows — with or without automatic differentiation.