eddymotion.model.gpr module¶
Derivations from scikit-learn for Gaussian Processes.
- eddymotion.model.gpr.BOUNDS_A: tuple[float, float] = (0.1, 2.35)¶
The limits for the parameter a (angular distance in rad).
- eddymotion.model.gpr.BOUNDS_LAMBDA: tuple[float, float] = (0.001, 1000)¶
The limits for the parameter λ (signal scaling factor).
- eddymotion.model.gpr.CONFIGURABLE_OPTIONS: Mapping[str, set] = {'CG': {'disp', 'gtol', 'maxiter'}, 'Nelder-Mead': {'adaptive', 'disp', 'fatol', 'maxiter'}}¶
A mapping from optimizer names to the option set they allow.
Add new optimizers to this mapping, together with the options they allow to be configured.
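For instance, one might query this mapping before configuring a regressor, or extend it with another SciPy method. This is a usage sketch only; the hypothetical "Powell" entry is illustrative and is not shipped by the module:

from eddymotion.model import gpr

# Inspect which options may be configured for a given optimizer name.
allowed = gpr.CONFIGURABLE_OPTIONS.get("Nelder-Mead", set())
print(sorted(allowed))  # ['adaptive', 'disp', 'fatol', 'maxiter']

# Hypothetical extension with another SciPy method (illustrative only).
gpr.CONFIGURABLE_OPTIONS = {**gpr.CONFIGURABLE_OPTIONS, "Powell": {"disp", "maxiter", "xtol"}}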
- class eddymotion.model.gpr.EddyMotionGPR(*args: Any, **kwargs: Any)[source]¶
Bases: GaussianProcessRegressor
A GP regressor specialized for eddymotion.
This specialization of the default GP regressor is created to allow the following extended behaviors:
- Pacify Scikit-learn’s estimator parameter checker to allow optimizers given by name (as a string) other than the default L-BFGS-B (fmin_l_bfgs_b).
- Enable custom options of optimizers. See scipy.optimize.minimize for the available options. Please note that only a few of them are currently supported.
In the future, this specialization would be the right place for hyperparameter optimization using cross-validation and such.
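A minimal usage sketch follows. The keyword arguments (kernel, alpha, optimizer) mirror scikit-learn's GaussianProcessRegressor and are assumed to be forwarded by this subclass, whose signature is documented only as (*args, **kwargs):

import numpy as np
from eddymotion.model.gpr import EddyMotionGPR, SphericalKriging

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 3))               # e.g., diffusion gradient directions
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit vectors
y = rng.standard_normal(30)                    # e.g., DWI signal at one voxel

gp = EddyMotionGPR(
    kernel=SphericalKriging(),   # assumed default-constructible
    alpha=0.5,                   # plays the role of sigma^2 (see below)
    optimizer="Nelder-Mead",     # optimizer given by name, as described above
)
gp.fit(X, y)
y_hat = gp.predict(X)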
In principle, Scikit-Learn’s implementation normalizes the training data as in [Andersson15] (see FSL’s source code). From their paper (p. 167, end of first column):
Typically one just subtracts the mean (\(\bar{\mathbf{f}}\)) from \(\mathbf{f}\) and then adds it back to \(f^{*}\), which is analogous to what is often done in “traditional” regression.
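In scikit-learn, this mean-centering is exposed through the normalize_y flag of GaussianProcessRegressor, which demeans the training targets (and, in recent releases, also scales them by their standard deviation) before fitting and undoes the transformation at prediction time. A generic illustration, not eddymotion's configuration:

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Targets are centered internally; the mean is added back to predictions.
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)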
Finally, the parameter \(\sigma^2\) maps onto Scikit-learn’s alpha of the regressor. Because it is not a parameter of the kernel, hyperparameter selection through gradient-descent with analytical gradient calculations would not work (the derivative of the kernel w.r.t. alpha is zero). I believe this is overlooked in [Andersson15], or they actually did not use analytical gradient-descent:
A note on optimisation
It is suggested, for example in Rasmussen and Williams (2006), that an optimisation method that uses derivative information should be used when finding the hyperparameters that maximise Eq. (12). The reason for that is that such methods typically use fewer steps, and when the cost of calculating the derivatives is small/moderate compared to calculating the functions itself (as is the case for Eq. (12)) then execution time can be much shorter. However, we found that for the multi-shell case a heuristic optimisation method such as the Nelder-Mead simplex method (Nelder and Mead, 1965) was frequently better at avoiding local maxima. Hence, that was the method we used for all optimisations in the present paper.
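The snippet below sketches what such a derivative-free search looks like with SciPy's minimize and the Nelder-Mead options listed in CONFIGURABLE_OPTIONS; the objective is a stand-in, not the regressor's actual log-marginal likelihood:

import numpy as np
from scipy.optimize import minimize

def objective(theta):
    # Placeholder for the negative log-marginal likelihood of Eq. (12).
    return float(np.sum((theta - 1.0) ** 2))

result = minimize(
    objective,
    x0=np.array([0.5, 0.5]),
    method="Nelder-Mead",  # no analytical gradients required
    options={"adaptive": True, "disp": False, "fatol": 1e-6, "maxiter": 500},
)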
Multi-shell regression (TODO). For multi-shell modeling, the kernel \(k(\textbf{x}, \textbf{x'})\) is updated following Eq. (14) in [Andersson15].
\[k(\textbf{x}, \textbf{x'}) = C_{\theta}(\mathbf{g}, \mathbf{g'}; a) C_{b}(|b - b'|; \ell)\]
where \(C_{b}\) is based on the log of the b-values ratio, a measure of distance along the b-direction, as given by Eq. (15):
\[C_{b}(b, b'; \ell) = \exp\left( - \frac{(\log b - \log b')^2}{2 \ell^2} \right),\]
where \(b\) and \(b'\) are the b-values, \(\mathbf{g}\) and \(\mathbf{g'}\) the unit diffusion-encoding gradient vectors of the shells, and \({a, \ell}\) some hyperparameters.
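As an illustration (not eddymotion's multi-shell implementation, which is marked TODO above), \(C_{b}\) can be written directly from Eq. (15):

import numpy as np

def cb_covariance(b1, b2, ell):
    """Covariance along the b-value axis for two shells (sketch of Eq. 15)."""
    log_ratio = np.log(float(b1)) - np.log(float(b2))
    return np.exp(-log_ratio**2 / (2 * ell**2))

cb_covariance(1000.0, 3000.0, ell=1.0)  # similarity between b=1000 and b=3000 shells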
The full GP regression kernel \(\mathbf{K}\) is then updated for a 2-shell case as follows (Eq. (16) in [Andersson15]):
\[\begin{split}\begin{equation} \mathbf{K} = \left[ \begin{matrix} \lambda C_{\theta}(\theta (\mathbf{G}_{1}); a) + \sigma_{1}^{2} \mathbf{I} & \lambda C_{\theta}(\theta (\mathbf{G}_{2}, \mathbf{G}_{1}); a) C_{b}(b_{2}, b_{1}; \ell) \\ \lambda C_{\theta}(\theta (\mathbf{G}_{1}, \mathbf{G}_{2}); a) C_{b}(b_{1}, b_{2}; \ell) & \lambda C_{\theta}(\theta (\mathbf{G}_{2}); a) + \sigma_{2}^{2} \mathbf{I} \\ \end{matrix} \right] \end{equation}\end{split}\]
References
[Andersson15] Andersson, J. L. R., & Sotiropoulos, S. N. (2015). Non-parametric representation and prediction of single- and multi-shell diffusion-weighted MRI data using Gaussian processes. NeuroImage, 122, 166–176.
- class eddymotion.model.gpr.ExponentialKriging(*args: Any, **kwargs: Any)[source]¶
Bases: Kernel
A scikit-learn kernel for DWI signals.
- diag(X: numpy.ndarray) → numpy.ndarray [source]¶
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
- Parameters:
X (ndarray of shape (n_samples_X, n_features)) – Left argument of the returned kernel k(X, Y)
- Returns:
K_diag – Diagonal of kernel k(X, X)
- Return type:
ndarray of shape (n_samples_X,)
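The equivalence stated above can be checked directly; this sketch assumes the kernel is default-constructible and is evaluated on unit gradient directions:

import numpy as np
from eddymotion.model.gpr import ExponentialKriging

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

kernel = ExponentialKriging()
assert np.allclose(kernel.diag(X), np.diag(kernel(X)))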
- property hyperparameter_a: sklearn.gaussian_process.kernels.Hyperparameter¶
- property hyperparameter_beta_l: sklearn.gaussian_process.kernels.Hyperparameter¶
- eddymotion.model.gpr.LBFGS_CONFIGURABLE_OPTIONS = {'disp', 'ftol', 'gtol', 'maxiter'}¶
The set of extended options that can be set on the default L-BFGS-B optimizer.
- eddymotion.model.gpr.NONGRADIENT_METHODS = {'Nelder-Mead'}¶
A set of optimization methods that do not use analytical gradients.
- eddymotion.model.gpr.SUPPORTED_OPTIMIZERS = {'CG', 'Nelder-Mead', 'fmin_l_bfgs_b'}¶
A set of supported optimizers (automatically created).
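The set is consistent with the configurable optimizers plus scikit-learn's default fmin_l_bfgs_b; a plausible construction (the authoritative one lives in the module source) is:

CONFIGURABLE_OPTIONS = {
    "CG": {"disp", "gtol", "maxiter"},
    "Nelder-Mead": {"adaptive", "disp", "fatol", "maxiter"},
}
SUPPORTED_OPTIMIZERS = set(CONFIGURABLE_OPTIONS) | {"fmin_l_bfgs_b"}
# {'CG', 'Nelder-Mead', 'fmin_l_bfgs_b'}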
- class eddymotion.model.gpr.SphericalKriging(*args: Any, **kwargs: Any)[source]¶
Bases: Kernel
A scikit-learn kernel for DWI signals.
- diag(X: numpy.ndarray) → numpy.ndarray [source]¶
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
- Parameters:
X (ndarray of shape (n_samples_X, n_features)) – Left argument of the returned kernel k(X, Y)
- Returns:
K_diag – Diagonal of kernel k(X, X)
- Return type:
ndarray of shape (n_samples_X,)
- property hyperparameter_a: sklearn.gaussian_process.kernels.Hyperparameter¶
- property hyperparameter_beta_l: sklearn.gaussian_process.kernels.Hyperparameter¶
- eddymotion.model.gpr.compute_pairwise_angles(X: np.ndarray, Y: np.ndarray | None = None, closest_polarity: bool = True, dense_output: bool = True) → np.ndarray [source]¶
Compute pairwise angles across diffusion gradient encoding directions.
Following [Andersson15], it computes the smallest of the angles between each pair if closest_polarity is True, i.e.,
\[\theta(\mathbf{g}, \mathbf{g'}) = \arccos(|\langle \mathbf{g}, \mathbf{g'} \rangle|)\]
- Parameters:
X ({array-like, sparse matrix} of shape (n_samples_X, n_features)) – Input data.
Y ({array-like, sparse matrix} of shape (n_samples_Y, n_features), optional) – Input data. If None, the output will be the pairwise similarities between all samples in X.
dense_output (bool, optional) – Whether to return dense output even when the input is sparse. If False, the output is sparse if both input arrays are sparse.
closest_polarity (bool, optional) – True to consider the smallest of the two angles between the crossing lines resulting from reversing each vector pair.
- Returns:
Pairwise angles across diffusion gradient encoding directions.
- Return type:
ndarray
Examples
>>> X = np.asarray([(1.0, -1.0), (0.0, 0.0), (0.0, 0.0)]).T
>>> compute_pairwise_angles(X, closest_polarity=False)[0, 1]
3.1415...
>>> X = np.asarray([(1.0, -1.0), (0.0, 0.0), (0.0, 0.0)]).T
>>> Y = np.asarray([(1.0, -1.0), (0.0, 0.0), (0.0, 0.0)]).T
>>> compute_pairwise_angles(X, Y, closest_polarity=False)[0, 1]
3.1415...
>>> X = np.asarray([(1.0, -1.0), (0.0, 0.0), (0.0, 0.0)]).T
>>> compute_pairwise_angles(X)[0, 1]
0.0
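The formula above can be restated in a few lines of NumPy, assuming the rows of the input are unit gradient vectors; this is an illustration, not the package's implementation, which additionally handles sparse inputs and the Y and dense_output arguments:

import numpy as np

def pairwise_angles(X, closest_polarity=True):
    cosines = X @ X.T                  # rows of X are unit gradient directions
    if closest_polarity:
        cosines = np.abs(cosines)      # ignore gradient polarity
    return np.arccos(np.clip(cosines, -1.0, 1.0))

X = np.asarray([(1.0, -1.0), (0.0, 0.0), (0.0, 0.0)]).T
pairwise_angles(X, closest_polarity=False)[0, 1]  # ~3.1415 (antipodal vectors)
pairwise_angles(X)[0, 1]                          # 0.0 once polarity is ignored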
- eddymotion.model.gpr.exponential_covariance(theta: numpy.ndarray, a: float) → numpy.ndarray [source]¶
Compute the exponential covariance for given distances and scale parameter.
Implements \(C_{\theta}\), following Eq. (9) in [Andersson15]:
\[\begin{equation} C(\theta) = e^{-\theta/a} \,\, \text{for} \, 0 \leq \theta \leq \pi, \end{equation}\]
\(\theta\) being computed as:
\[\theta(\mathbf{g}, \mathbf{g'}) = \arccos(|\langle \mathbf{g}, \mathbf{g'} \rangle|)\]
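A one-line sketch of Eq. (9), for illustration rather than the module's own code:

import numpy as np

def exponential_cov(theta, a):
    return np.exp(-np.asarray(theta) / a)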
- eddymotion.model.gpr.spherical_covariance(theta: numpy.ndarray, a: float) → numpy.ndarray [source]¶
Compute the spherical covariance for given distances and scale parameter.
Implements \(C_{\theta}\), following Eq. (10) in [Andersson15]:
\[\begin{split}\begin{equation} C(\theta) = \begin{cases} 1 - \frac{3 \theta}{2 a} + \frac{\theta^3}{2 a^3} & \textnormal{if} \; \theta \leq a \\ 0 & \textnormal{if} \; \theta > a \end{cases} \end{equation}\end{split}\]
\(\theta\) being computed as:
\[\theta(\mathbf{g}, \mathbf{g'}) = \arccos(|\langle \mathbf{g}, \mathbf{g'} \rangle|)\]
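Correspondingly, Eq. (10) can be sketched as a piecewise expression that drops to zero once the angular distance exceeds a (illustrative, not the module's code):

import numpy as np

def spherical_cov(theta, a):
    theta = np.asarray(theta)
    return np.where(theta <= a, 1 - 1.5 * theta / a + 0.5 * (theta / a) ** 3, 0.0)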