Solver
Linear
Thunderbolt.SchurComplementLinearSolver — Type
SchurComplementLinearSolver(inner_alg::SciMLBase.AbstractLinearAlgorithm)

A solver for block systems of the form
\[\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & 0 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}\]
with a small zero block of size $N_2 \times N_2$ and an invertible $A_{11}$ of size $N_1 \times N_1$. The inner linear solver is responsible for solving $N_2$ systems of the form $A_{11} z_i = c_i$.
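The reduction behind this solver can be sketched as follows. This is a minimal illustration of the block elimination, not the package implementation; `schur_solve` is a hypothetical helper name:

```julia
using LinearAlgebra, SparseArrays

# Eliminate u₁ via A₁₁, then solve the small N₂×N₂ Schur complement for u₂.
# Each `A11 \ …` column solve corresponds to one inner linear solve A₁₁ zᵢ = cᵢ.
function schur_solve(A11, A12, A21, b1, b2)
    Z = A11 \ Matrix(A12)     # N₂ inner solves, one per column of A₁₂
    w = A11 \ b1              # inner solve for the right-hand side
    S = -A21 * Z              # Schur complement (the (2,2) block is zero)
    u2 = S \ (b2 - A21 * w)
    u1 = w - Z * u2
    return u1, u2
end
```

Since the (2,2) block vanishes, the Schur complement is simply $-A_{21} A_{11}^{-1} A_{12}$, a dense $N_2 \times N_2$ matrix that is cheap to factor when $N_2$ is small.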
Preconditioners
Thunderbolt.Preconditioners.L1GSPrecBuilder — Type
L1GSPrecBuilder(device::AbstractDevice)

A builder for the L1 Gauss-Seidel preconditioner. This struct encapsulates the backend and provides a method to build the preconditioner.
Fields
device::AbstractDevice: The backend used for the preconditioner. See AbstractDevice for more information.
Thunderbolt.Preconditioners.L1GSPreconditioner — Type
L1GSPreconditioner{Partitioning, SweepPlanType}

The ℓ₁ Gauss–Seidel preconditioner is a robust and parallel-friendly preconditioner for sparse matrices.
Algorithm
The L1-GS preconditioner is constructed by dividing the matrix into nparts diagonal blocks:
- Let Ωₖ denote the block with index k.
- For each row i ∈ Ωₖ, we define the following sets:
  - $ Ωⁱ := {j ∈ Ωₖ} $ → the set of columns in the diagonal block for row i
  - $ Ωⁱₒ := {j ∉ Ωₖ} $ → the remaining "off-diagonal" columns in row i
The preconditioner matrix $M_{ℓ_1}$ is defined as:
\[M_{ℓ_1GS} = M_{HGS} + D^{ℓ_1}\]
Where $D^{ℓ_1}$ is a diagonal matrix with entries $d_{ii}^{ℓ_1} = \sum_{j ∈ Ωⁱₒ} |a_{ij}|$, and $M_{HGS}$ is obtained when the diagonal partitions are chosen to be the Gauss–Seidel sweeps on $A_{kk}$. However, we use another convergent variant, which takes advantage of a local estimate with threshold η (checking whether $a_{ii} \geq \eta \, d_{ii}^{ℓ_1}$):
\[M_{ℓ_1GS*} = M_{HGS} + D^{ℓ_1*}, \quad \text{where} \quad d_{ii}^{ℓ_1*} = \begin{cases} 0, & \text{if } a_{ii} \geq \eta d_{ii}^{ℓ_1}; \\ d_{ii}^{ℓ_1}/2, & \text{otherwise.} \end{cases}\]
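The modified diagonal $D^{ℓ_1*}$ can be sketched as follows for a contiguous row-wise partition. This is an illustrative sketch, not the package implementation; the helper name `l1_diagonal` and the assumption of equal-size contiguous blocks are ours:

```julia
using SparseArrays, LinearAlgebra

# Compute dᵢᵢ^{ℓ₁*} for each row i: sum |aᵢⱼ| over columns j outside the
# diagonal block Ωₖ owning row i, then apply the η-criterion from above.
function l1_diagonal(A::SparseMatrixCSC, partsize::Int, η::Real)
    n = size(A, 1)
    d = zeros(n)
    At = sparse(transpose(A))  # CSC stores columns; Aᵀ's columns are A's rows
    for i in 1:n
        block = (i - 1) ÷ partsize        # index k of the block Ωₖ owning row i
        s = 0.0
        for ptr in nzrange(At, i)
            j = rowvals(At)[ptr]
            if (j - 1) ÷ partsize != block  # column lies outside Ωₖ
                s += abs(nonzeros(At)[ptr])
            end
        end
        # drop the correction entirely when the diagonal already dominates
        d[i] = A[i, i] >= η * s ? 0.0 : s / 2
    end
    return d
end
```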
Fields
partitioning: Encapsulates partitioning data (e.g. nparts, partsize, backend).
sweep: The sweep plan containing the cache and diagonal data ($D + D^{ℓ_1}$).
Example
builder = L1GSPrecBuilder(PolyesterDevice(4))
N = 128 * 16
A = spdiagm(0 => 2 * ones(N), -1 => -ones(N-1), 1 => -ones(N-1))
partsize = 16
# Basic usage
prec = builder(A, partsize)
# With options
prec = builder(A, partsize;
isSymA = true, # set true for symmetric matrices (better performance)
η = 1.5, # threshold parameter
sweep = SymmetricSweep(), # ForwardSweep(), BackwardSweep(), or SymmetricSweep()
cache_strategy = MatrixViewCache() # or PackedBufferCache() for GPU
)
Thunderbolt.Preconditioners.ForwardSweep — Type
ForwardSweep <: AbstractSweep

Forward sweep using lower triangular part: $M = D + L$
Thunderbolt.Preconditioners.BackwardSweep — Type
BackwardSweep <: AbstractSweep

Backward sweep using upper triangular part: $M = D + U$
Thunderbolt.Preconditioners.SymmetricSweep — Type
SymmetricSweep <: AbstractSweep

Symmetric sweep combining forward and backward: $M = (D + L) D^{-1} (D + U)$ [saad2003iterative].
It is essential to clarify the distinction between using Symmetric Gauss-Seidel (SGS) as a smoother versus a preconditioner. While SGS always involves a combination of forward and backward sweeps, the implementation differs subtly depending on the application.
For a symmetric matrix, we define $A := L + D + L^{T}$. For the linear system $Ax = b$, the iteration matrix $G$ satisfies:
\[ x^{(k+1)} = G x^{(k)} + f\]
Consequently, we have the following formulations:
As a Smoother:
The Forward Sweep:
\[ (D + L)x^{(k+1/2)} = b - L^T x^{(k)} → x^{(k+1/2)} = \underbrace{- (D + L)^{-1} L^T}_{G_{fs}} x^{(k)} + \underbrace{(D + L)^{-1} b}_{f_{fs}}\]
The Backward Sweep:
\[ (D + L^T)x^{(k+1)} = b - L x^{(k+1/2)} → x^{(k+1)} = \underbrace{- (D + L^T)^{-1} L}_{G_{bs}} x^{(k+1/2)} + \underbrace{(D + L^T)^{-1} b}_{f_{bs}}\]
The Combined Iteration Matrix:
\[ G_{SGS} = G_b G_f = (D + L^T)^{-1} L (D + L)^{-1} L^T\]
As a Preconditioner:
Here, we employ SGS in the context of left preconditioning. That is, we consider the left-preconditioned system:
\[ M^{-1} A x = M^{-1} b\]
and solve for the preconditioned residual $z$.
\[ r_k = b - Ax_k \\ M z_k = r_k ⟺ \text{\texttt{LinearSolve.ldiv!(z, P, r)}} \]
Furthermore, the relation between the preconditioner $M$ and the iteration matrix $G$ is given by:
\[ G = I - M^{-1} A\]
By substituting in the previous equation with $G_{SGS} = G_b G_f$ (defined in the smoother section) and solving for $M$, we obtain:
\[ M_{SGS} = (D + L) D^{-1} (D + L^T)\]
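This identity is easy to check numerically. The following sketch (our own, with illustrative values, not package code) verifies $G_{SGS} = I - M_{SGS}^{-1} A$ for a small symmetric matrix:

```julia
using LinearAlgebra

# A = L + D + Lᵀ with strictly lower-triangular L, as defined above.
A = [4.0 -1.0 0.0; -1.0 4.0 -1.0; 0.0 -1.0 4.0]
D = Diagonal(A)
L = tril(A, -1)                   # strictly lower part

M = (D + L) * inv(D) * (D + L')   # M_SGS = (D + L) D⁻¹ (D + Lᵀ)
G = inv(D + L') * L * inv(D + L) * L'  # G_SGS = G_b G_f

err = opnorm(I - inv(M) * A - G)  # should vanish up to roundoff
```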
Thunderbolt.Preconditioners.PackedBufferCache — Type
PackedBufferCache <: AbstractCacheStrategy

Store the to-be-applied sweeps in a packed dense buffer. Efficient for small partitions (e.g., GPU).
Thunderbolt.Preconditioners.MatrixViewCache — Type
MatrixViewCache <: AbstractCacheStrategy

Store the to-be-applied sweeps in a matrix view. Efficient for large partitions (CPU).
Nonlinear
Thunderbolt.NewtonRaphsonSolver — Type
NewtonRaphsonSolver{T}

Classical Newton-Raphson solver to solve nonlinear problems of the form F(u) = 0. To use the Newton-Raphson solver you have to dispatch on
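The underlying iteration can be sketched as follows. This is a minimal illustration of the classical method, not the package's solver, which additionally hooks into user-dispatched residual and Jacobian assembly; the names `newton_raphson`, `F`, and `J` are ours:

```julia
using LinearAlgebra

# Classical Newton–Raphson for F(u) = 0: at each step solve J(u) Δu = F(u)
# and update u ← u − Δu until the residual norm drops below tol.
function newton_raphson(F, J, u0; tol = 1e-10, maxiter = 25)
    u = copy(u0)
    for _ in 1:maxiter
        r = F(u)
        norm(r) < tol && return u
        u -= J(u) \ r
    end
    error("Newton–Raphson did not converge in $maxiter iterations")
end
```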
Thunderbolt.MultiLevelNewtonRaphsonSolver — Type
MultilevelNewtonRaphsonSolver{T}

Multilevel Newton-Raphson solver [RabSanHsu:1979:mna] for nonlinear problems of the form F(u,v) = 0; G(u,v) = 0. To use the Multilevel solver you have to dispatch on
Time
Missing docstring for BackwardEulerSolver. Check Documenter's build log for details.
Thunderbolt.ForwardEulerCellSolver — Type
Simple forward Euler scheme to solve the cell model.
Missing docstring for AdaptiveForwardEulerSubstepper. Check Documenter's build log for details.
Thunderbolt.HomotopyPathSolver — Type
HomotopyPathSolver{IS, T, PFUN}

Solve the nonlinear problem F(u,t)=0 with given time increments Δt on some interval [t_begin, t_end], where t is some pseudo-time parameter.
Operator Splitting Adaptivity
Thunderbolt.ReactionTangentController — Type
ReactionTangentController{LTG <: OS.LieTrotterGodunov, T <: Real} <: OS.AbstractOperatorSplittingAlgorithm

A timestep length controller for LieTrotterGodunov [5–7] operator splitting using the reaction tangent as proposed in [25]. The next timestep length is calculated as
\[\sigma\left(R_{\max }\right):=\left(1.0-\frac{1}{1+\exp \left(\left(\sigma_{\mathrm{c}}-R_{\max }\right) \cdot \sigma_{\mathrm{s}}\right)}\right) \cdot\left(\Delta t_{\max }-\Delta t_{\min }\right)+\Delta t_{\min }\]
Fields
ltg::LTG: LieTrotterGodunov algorithm
σ_s::T: steepness
σ_c::T: offset in R axis
Δt_bounds::NTuple{2,T}: lower and upper timestep length bounds
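The sigmoid formula above can be transcribed directly. This is an illustrative sketch with made-up parameter values; `next_dt` is not the package API:

```julia
# σ(R_max) interpolates between Δt_max (slow reaction) and Δt_min (fast
# reaction) through a logistic curve centered at σ_c with steepness σ_s.
function next_dt(R_max, σ_s, σ_c, Δt_min, Δt_max)
    σ = 1.0 - 1.0 / (1.0 + exp((σ_c - R_max) * σ_s))
    return σ * (Δt_max - Δt_min) + Δt_min
end
```

Note that a large reaction tangent R_max drives σ toward 0, so the controller shrinks the timestep toward Δt_min exactly when the reaction is stiffest.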