Integration
primate supports a variety of matrix types out of the box, including NumPy ndarrays, compressed sparse matrices (à la SciPy), and LinearOperators.
The basic requirements for any operator A
to be used with e.g. the Lanczos
method are:
- A method A.matvec(input: ndarray) -> ndarray implementing $v \mapsto Av$
- An attribute A.shape -> tuple[int, int] giving the output/input dimensions of A
Here’s an example of a simple operator representing a diagonal matrix:
```python
import numpy as np
from numpy.typing import ArrayLike
from scipy.sparse.linalg import LinearOperator

class DiagonalOp(LinearOperator):
    diag: np.ndarray = None

    def __init__(self, d: ArrayLike, dtype = None):
        self.diag = np.array(d)                 # diagonal entries of A
        self.shape = (len(d), len(d))           # (output dim, input dim)
        self.dtype = np.dtype('float32') if dtype is None else dtype

    def _matvec(self, x: ArrayLike) -> np.ndarray:
        out = self.diag * np.ravel(x)           # elementwise product == A @ x
        return out.reshape(x.shape)
```
Note that by following the subclassing rules of SciPy’s LinearOperator, this class inherits a .matvec()
method and thus satisfies the constraints above.
C++ usage
Similarly, to call any C++ function, such as hutch or lanczos, simply pass any type with .shape() and .matvec() member functions:
```cpp
#include <utility>   // std::pair, std::make_pair

class LinOp {
  int nrow, ncol;
public:
  LinOp(int nr, int nc) : nrow(nr), ncol(nc) {}

  // Writes A * input into output
  void matvec(const float* input, float* output) const {
    ... // implementation details
  }

  // (number of rows, number of columns)
  auto shape() const -> std::pair<int, int> { return std::make_pair(nrow, ncol); }
};
```
It’s up to you to ensure shape() yields the correct dimensions: primate will supply vectors to input of length shape().second (the number of columns), and it guarantees only that the buffer pointed to by output has room for at least shape().first (the number of rows) entries, no more.
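
For concreteness, here is a minimal sketch of an operator that honors this contract: a dense diagonal operator, analogous to the Python DiagonalOp above. The DiagOp name and the use of std::vector<float> for storage are illustrative choices only, not part of primate's API.

```cpp
#include <cstddef>   // std::size_t
#include <utility>   // std::pair, std::make_pair
#include <vector>

// Illustrative only: a diagonal operator satisfying the matvec()/shape() interface.
struct DiagOp {
  std::vector<float> diag;   // the diagonal entries of A

  explicit DiagOp(std::vector<float> d) : diag(std::move(d)) {}

  // output[i] = diag[i] * input[i]; input is read up to shape().second entries,
  // output is written up to shape().first entries, matching the contract above.
  void matvec(const float* input, float* output) const {
    for (std::size_t i = 0; i < diag.size(); ++i) {
      output[i] = diag[i] * input[i];
    }
  }

  auto shape() const -> std::pair<int, int> {
    const int n = static_cast<int>(diag.size());
    return std::make_pair(n, n);
  }
};
```

Because it exposes the same two member functions, an instance of DiagOp can be passed anywhere LinOp could.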
To read more about how these semantics extend to the C++ side via C++20 concepts, see the C++ integration guide. If you’re using pybind11 and you want to extend primate’s Python API to work natively with linear operators implemented in C++, see the pybind11 integration guide.
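
As a taste of what that looks like, here is a hypothetical pybind11 binding for the LinOp class above. The module name linop_ext, the float dtype, and the NumPy conversion details are assumptions made for illustration; the exact hooks primate expects from such a binding are described in the pybind11 integration guide.

```cpp
// Hypothetical sketch: assumes the LinOp class above is visible in this
// translation unit and that its matvec() implementation has been filled in.
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

namespace py = pybind11;

PYBIND11_MODULE(linop_ext, m) {
  py::class_<LinOp>(m, "LinOp")
    .def(py::init<int, int>())
    // Mirror the Python-side A.shape attribute as a (rows, cols) tuple.
    .def_property_readonly("shape", [](const LinOp& op) { return op.shape(); })
    // Mirror A.matvec(x): accept a NumPy array, return a new NumPy array.
    .def("matvec", [](const LinOp& op,
                      py::array_t<float, py::array::c_style | py::array::forcecast> x) {
      py::array_t<float> out(op.shape().first);
      op.matvec(x.data(), out.mutable_data());
      return out;
    });
}
```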