Types & Data structures
Nonlinear eigenvalue problems in NEP-PACK are represented by objects of the type NEP. Each NEP-object needs to provide compute functions as we describe in the manual page on compute functions. Efficient compute functions are already implemented for many common and several general types.
In the section specific types below, we list a number of common classes. As a user, first see if your problem fits one of these classes, since NEP-PACK has very efficient compute functions for them. If your NEP does not fit into any of the specific types, we recommend that you try to specify the problem as an SPMF_NEP, which is described in the section general types. If your problem can be phrased as a sum of two specific or general types, it is recommended that you use the SumNEP-type. NEP-PACK also supports efficient computation with low-rank NEPs via the LowRankFactorizedNEP.
If your NEP is not easily expressed as an SPMF_NEP, you may want to use the helper types. Data types associated with certain compact pencils are also supported, as described in the section on CORK data types.
The types also have a number of associated operations and transformation functions. The following example illustrates how you can resample a NEP (by interpolation in a Chebyshev polynomial basis at Chebyshev points, provided by the ChebPEP constructor) and then apply a NEP-solver which requires many function evaluations, in this case contour_beyn. This two-stage solution approach is much more efficient than applying the solver directly.
julia> nep_bem=nep_gallery("bem_fichera");
julia> @time cheb_nep=ChebPEP(nep_bem,20,0,10); # resample the NEP with 20 cheb points
32.651313 seconds (263.16 M allocations: 36.279 GiB, 17.19% gc time)
julia> @time (λ1,v1)=contour_beyn(nep_bem,radius=[5 0.2],σ=5.0, N=100,k=10,);
180.329069 seconds (1.39 G allocations: 183.462 GiB, 13.01% gc time)
julia> @time (λ2,v2)=contour_beyn(cheb_nep,radius=[5 0.2],σ=5.0, N=100,k=10,);
4.319376 seconds (362.34 k allocations: 8.856 GiB, 12.42% gc time)
Note that running the contour integral method on the cheb_nep is much faster, even if we take into account that the resampling takes some computational effort. The computed solutions are very similar:
julia> λ1
2-element Array{Complex{Float64},1}:
6.450968052414575 - 4.819767260258272e-5im
8.105873440358572 - 0.00012794471501522612im
julia> λ2
2-element Array{Complex{Float64},1}:
6.450968052984224 - 4.819762104884087e-5im
8.105873439472735 - 0.0001279450670266529im
Moreover, if we want a very accurate solution, we can run a locally convergent iterative method on the original problem. It converges in very few iterations:
julia> (λ2_1,v1_1)=quasinewton(nep_bem,λ=λ2[1], v=v2[:,1],logger=1);
Precomputing linsolver
iter 1 err:3.638530108313503e-12 λ=6.450968052984224 - 4.819762104884087e-5im
iter 2 err:1.2789912958165988e-14 λ=6.450968052419756 - 4.819768321350077e-5im
julia> (λ2_2,v1_2)=quasinewton(nep_bem,λ=λ2[2], v=v2[:,2],logger=1)
Precomputing linsolver
iter 1 err:3.4824421200567996e-12 λ=8.105873439472735 - 0.0001279450670266529im
iter 2 err:2.05407750614131e-14 λ=8.105873440343123 - 0.00012794469925178411im
The use of Chebyshev interpolation in combination with the boundary element method (but with a companion linearization approach) was presented in Effenberger and Kressner, "Chebyshev interpolation for nonlinear eigenvalue problems", BIT Numerical Mathematics 52.4 (2012): 933-951. See also the tutorial on the boundary element method.
Specific types
struct DEP <: AbstractSPMF
function DEP(AA::Vector{AbstractMatrix} [,tauv::Vector=[0,1.0]])
A DEP (Delay Eigenvalue Problem) is a problem defined by the sum $M(λ)=-λI+Σ_i A_i e^{-τ_i λ}$,
where all of the matrices are of size $n×n$. This type of NEP describes the stability of time-delay systems.
The constructor takes the system matrices $A_i$, and tauv is a vector of the delay values $τ_i$.
Example:
julia> A0=randn(3,3); A1=randn(3,3);
julia> tauv=[0,0.2] # Vector with delays
julia> dep=DEP([A0,A1],tauv)
julia> λ=3.0;
julia> M1=compute_Mder(dep,λ)
julia> M2=-λ*I+A0+A1*exp(-tauv[2]*λ)
julia> norm(M1-M2)
0.0
There are two types to represent PEPs natively in NEP-PACK. You can use a monomial basis with PEP or a Chebyshev basis with ChebPEP.
struct PEP <: AbstractSPMF
function PEP(AA::Vector{AbstractMatrix})
The type PEP defines a polynomial eigenvalue problem via its monomial coefficients. A polynomial eigenvalue problem (PEP) is defined by the sum $M(λ)=Σ_i A_i λ^i$,
where $i = 0,1,2,...$, and all of the matrices are of size $n×n$. The vector AA contains $A_0,A_1,...$.
Example
julia> A0=[1.0 3; 4 5]; A1=A0.+one(2); A2=ones(2,2);
julia> pep=PEP([A0,A1,A2])
julia> compute_Mder(pep,3)-(A0+A1*3+A2*9)
2×2 Array{Float64,2}:
0.0 0.0
0.0 0.0
See also polyeig, companion, ChebPEP, interpolate.
ChebPEP(orgnep::NEP,k,[a=-1,[b=1]] [,cosine_formula_cutoff=5])
The type ChebPEP<:AbstractSPMF represents a polynomial function where the function is stored using a Chebyshev basis scaled to the interval [a,b], i.e., $M(λ)=Σ_i A_i T_i(λ)$,
where $T_i$ are the scaled and shifted Chebyshev polynomials.
The constructor ChebPEP takes nep::NEP as an input and interpolates this NEP in k Chebyshev nodes, resulting in a polynomial of degree k-1, represented by its coefficients in the Chebyshev basis. Interpolation in Chebyshev nodes and representation in the Chebyshev basis is known to have attractive approximation properties, as well as robustness with respect to round-off errors.
The kwarg cosine_formula_cutoff decides how the Chebyshev polynomials should be computed. For larger degrees, it is better to use the cosine formula, whereas for low degrees the explicit monomial expression is more efficient. The explicit monomial expression will be used for degrees lower than cosine_formula_cutoff.
Example:
julia> nep=nep_gallery("dep0");
julia> chebpep=ChebPEP(nep,9);
julia> using LinearAlgebra;
julia> norm(compute_Mder(nep,0.3)-compute_Mder(chebpep,0.3))
1.2881862971045282e-8
julia> chebpep=ChebPEP(nep,19); # Better interpolation
julia> norm(compute_Mder(nep,0.3)-compute_Mder(chebpep,0.3))
2.0312004517316714e-15
REP
The Rational Eigenvalue Problem is described by:
function REP(A,roots,poles)
A REP-call creates a rational eigenvalue problem. The REP is defined by the sum $Σ_i A_i s_i(λ)/q_i(λ)$, where i = 0,1,2,..., all of the matrices are of size $n×n$ and $s_i$ and $q_i$ are polynomials. The constructor takes as input the roots and poles of polynomials with normalized highest coefficient. The NEP is defined as $M(λ)=-λI+A_0+A_1 p(λ)/q(λ)$,
where p has the roots roots and q has the roots poles.
Example
julia> A0=[1 2; 3 4]; A1=[3 4; 5 6];
julia> nep=REP([A0,A1],[1,3], [4,5,6]);
julia> compute_Mder(nep,3)
2×2 Array{Float64,2}:
Inf Inf
Inf Inf
julia> (λ,x)=quasinewton(nep,v=[1;0])
(-0.3689603779201249 + 0.0im, Complex{Float64}[-2.51824+0.0im, 1.71283+0.0im])
julia> -λ*x+A0*x+A1*x*(λ-1)*(λ-3)/((λ-4)*(λ-5)*(λ-6))
2-element Array{Complex{Float64},1}:
-2.5055998942313806e-13 + 0.0im
1.318944953254686e-13 + 0.0im
General types
The basic class is the abstract class NEP which represents a NEP. All other defined NEPs should inherit from NEP, or from a more specialized version; see, e.g., ProjectableNEP or AbstractSPMF.
abstract NEP
A NEP object represents a nonlinear eigenvalue problem. All NEPs should implement
size(nep::NEP,d)
and at least one of the following:
- M = compute_Mder(nep::NEP,λ::Number,i::Integer=0)
- V = compute_Mlincomb(nep::NEP,λ::Number,V::AbstractVecOrMat,a::Vector) (or compute_Mlincomb!)
- MM = compute_MM(nep::NEP,S,V)
Below we list the most common types built into NEP-PACK, and further down how you can access the NEP. However, the structure is designed for extensibility, and hence it is possible for you to extend it with your own class of NEPs.
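As an illustration of this extensibility, here is a minimal sketch of a user-defined NEP: a hypothetical 1×1 problem $M(λ)=λ^2-1$. It assumes that compute_Mder can be imported for extension directly from NonlinearEigenproblems; see the manual page on compute functions for the authoritative interface.
julia> using NonlinearEigenproblems
julia> import NonlinearEigenproblems: compute_Mder
julia> struct MyNEP <: NEP end # hypothetical 1×1 NEP representing M(λ)=λ^2-1
julia> Base.size(::MyNEP) = (1,1); Base.size(::MyNEP,d) = 1;
julia> compute_Mder(::MyNEP,λ::Number,i::Integer=0) =
           i==0 ? fill(λ^2-1,1,1) : i==1 ? fill(2λ,1,1) : i==2 ? fill(2one(λ),1,1) : fill(zero(λ),1,1);
julia> compute_Mder(MyNEP(),3.0) # the derivative of order 0, i.e., the 1×1 matrix [8.0]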
SPMF
One of the most common problem types is the SPMF_NEP. SPMF is short for Sum of Products of Matrices and Functions and the NEP is described by $M(λ)=Σ_i A_i f_i(λ)$.
The constructor of the SPMF_NEP takes the matrices and the functions, but also a number of other (optional) parameters which may increase performance or preserve underlying types.
struct SPMF_NEP{T<:AbstractMatrix,Ftype} <: AbstractSPMF{T}
function SPMF_NEP(AA, fii [,check_consistency=true] [,Schur_fact = false]
[,align_sparsity_patterns = false] [,Ftype=ComplexF64])
An SPMF_NEP is a NEP defined by a Sum of Products of Matrices and Functions, i.e., $M(λ)=Σ_i A_i f_i(λ)$.
All of the matrices $A_0,...$ are of size $n×n$ and the $f_i$ are functions. The functions $f_i$ must be defined for matrices in the standard matrix function sense. The constructor creates an SPMF_NEP consisting of matrices AA and functions fii.
Parameters
- AA is a Vector of matrices. The matrices have to be of the same type. If you need a NEP with different types you can use SumNEP to construct a sum of two SPMF_NEPs.
- fii is a Vector of functions. Each function takes one parameter S. The functions must be available both as a scalar valid function and a matrix function. If S is a square matrix, fii[k](S) must also be a square matrix. If S is a scalar, fii[k](S) is a scalar.
- check_consistency (default true) determines if we should initiate by running tests to verify that the fii satisfies the conditions that every function is valid both for matrices and scalars. This is done by using @code_typed and the functions need to be type-stable in that sense.
- align_sparsity_patterns (default false) has effect only for sparse matrices (SparseMatrixCSC). If align_sparsity_patterns=true the SparseMatrixCSC matrices will be replaced by equivalent SparseMatrixCSC matrices where the colptr and rowval are identical. This increases the speed of some functions, e.g., compute_Mder. If align_sparsity_patterns=true the matrices in the NEP should be considered read-only. If the sparsity patterns are completely or mostly distinct, it may be more efficient to set this flag to false.
- Ftype (default ComplexF64) determines an underlying type of the functions. The output of any function should be "smaller" than the promoted type of the input and Ftype. More precisely, if F=fii[k], then the type logic is eltype(F(λ))=promote_type(eltype(λ),Ftype). (See the sketch after this list.)
- Schur_fact (default false) determines if the compute_MM function should triangularize the matrix before carrying out the computation. This can be faster for large matrices.
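For instance, the Ftype parameter can be used to keep the computed matrices in real arithmetic when all functions are real-valued for real arguments. The following is a minimal sketch, assuming Ftype is passed as a keyword argument as in the signature above; the element type follows from the promotion rule eltype(F(λ))=promote_type(eltype(λ),Ftype).
julia> A0=[1.0 3; 4 5]; A1=[3.0 4; 5 6];
julia> nep_real=SPMF_NEP([A0,A1],[S->one(S),S->exp(S)],Ftype=Float64);
julia> eltype(compute_Mder(nep_real,1.0)) # expected to be Float64 by the promotion rule above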
Example
julia> A0=[1 3; 4 5]; A1=[3 4; 5 6];
julia> id_op=S -> one(S) # Note: We use one(S) to be valid both for matrices and scalars
julia> exp_op=S -> exp(S)
julia> nep=SPMF_NEP([A0,A1],[id_op,exp_op]);
julia> compute_Mder(nep,1)-(A0+A1*exp(1))
2×2 Array{Float64,2}:
0.0 0.0
0.0 0.0
abstract AbstractSPMF <: ProjectableNEP
An AbstractSPMF is an abstract class representing NEPs which can be represented as a sum of products of matrices and functions $M(λ)=Σ_i A_i f_i(λ)$, where i = 0,1,2,..., all of the matrices are of size $n×n$ and $f_i$ are functions.
Any AbstractSPMF has to have implementations of get_Av() and get_fv(), which return the matrices and the functions, respectively.
get_Av(nep::AbstractSPMF)
Returns an array of matrices $A_i$ in the AbstractSPMF: $M(λ)=Σ_i A_i f_i(λ)$
get_fv(nep::AbstractSPMF)
Returns an array of functions (that can be evaluated both as scalar and matrix functions) $f_i$ in the AbstractSPMF: $M(λ)=Σ_i A_i f_i(λ)$
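As a small usage sketch (the matrices A0 and A1 are just example data), the output of get_Av and get_fv can be used to evaluate the sum manually and compare with compute_Mder; the printed value is not shown here, but it should be zero up to rounding errors.
julia> using LinearAlgebra
julia> A0=[1.0 3; 4 5]; A1=[3.0 4; 5 6];
julia> nep=SPMF_NEP([A0,A1],[S->one(S),S->exp(S)]);
julia> Av=get_Av(nep); fv=get_fv(nep);
julia> norm(Av[1]*fv[1](2.0)+Av[2]*fv[2](2.0)-compute_Mder(nep,2.0)) # should be (close to) zero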
Projectable NEP types
There are also types associated with projection, described on the projection manual page.
SumNEP
It is also possible to consider NEPs that are sums of other NEPs. For such situations there are SumNEPs, specifically GenericSumNEP and SPMFSumNEP. Both are constructed using the function SumNEP.
SumNEP(nep1::NEP,nep2::NEP)
SumNEP(nep1::AbstractSPMF,nep2::AbstractSPMF)
SumNEP is a function creating an object that corresponds to a sum of two NEPs, i.e., if nep is created by SumNEP it is defined by $M(λ)=M_1(λ)+M_2(λ)$,
where $M_1$ and $M_2$ are defined by nep1 and nep2.
Example:
julia> nep1=DEP([ones(3,3),randn(3,3)])
julia> nep2=PEP([ones(3,3),randn(3,3),randn(3,3)])
julia> sumnep=SumNEP(nep1,nep2);
julia> s=3.0;
julia> M=compute_Mder(sumnep,s)
3×3 Array{Float64,2}:
8.54014 6.71897 7.12007
-0.943908 -13.0795 -0.621659
6.03155 -7.26726 -6.42828
julia> M1=compute_Mder(nep1,s);
julia> M2=compute_Mder(nep2,s);
julia> M1+M2 # Same as M
3×3 Array{Float64,2}:
8.54014 6.71897 7.12007
-0.943908 -13.0795 -0.621659
6.03155 -7.26726 -6.42828
See also: SPMFSumNEP, GenericSumNEP
struct GenericSumNEP{NEP1<:NEP,NEP2<:NEP} <: NEP
See also: SumNEP, SPMFSumNEP
struct SPMFSumNEP{NEP1<:AbstractSPMF,NEP2<:AbstractSPMF} <: AbstractSPMF{AbstractMatrix}
See also: SumNEP, GenericSumNEP
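The type parameters above suggest that SumNEP returns an SPMFSumNEP when both terms are AbstractSPMFs and a GenericSumNEP otherwise; this is an assumption based on the signatures, not verified output. A quick check could look as follows.
julia> nep1=DEP([ones(3,3),randn(3,3)]);
julia> nep2=PEP([ones(3,3),randn(3,3)]);
julia> SumNEP(nep1,nep2) isa SPMFSumNEP # expected true, since DEP and PEP are both AbstractSPMFs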
Low-rank NEPs
struct LowRankFactorizedNEP <: AbstractSPMF
function LowRankFactorizedNEP(L::Vector,U::Vector,f::Vector)
function LowRankFactorizedNEP(L::Vector,U::Vector,A::Vector, f::Vector)
Representation of a NEP which has low rank in the sense that it is an SPMF where each of the terms is factorized: A[i]=L[i]*U[i]'. The factorization is provided in the L and U vectors, and the full matrices A[i] can either be provided explicitly or are otherwise computed implicitly.
Example:
julia> L=randn(5,1); U=randn(5,1);
julia> f=S->exp(S)
julia> nep=LowRankFactorizedNEP([L],[U],[f]);
julia> X=randn(5,2);
julia> norm(compute_Mlincomb(nep,0.0,X)-L*U'*X*ones(2),1)
6.661338147750939e-16
CORK data types
struct CORKPencil
function CORKPencil(M,N,Av,Bv,Z);
function CORKPencil(nep,is)
The struct CORKPencil represents a pencil with a particular structure, as given in the reference below. The data can either be constructed directly via the first constructor, or from a NEP via the second constructor. The second constructor takes a NEP and is, which specifies a CORK-structure as well as an approximation method. This can be an object of the type IarCorkLinearization or NleigsCorkLinearization, which are the CORK-linearizations equivalent to (certain versions of) iar and nleigs.
See buildPencil for how to build the standard pencil.
Example:
The following example constructs a CORKPencil from a general NEP and then computes eigenvalue approximations of the NEP by the interpolation approach of nleigs.
julia> using LinearAlgebra
julia> A=(1:4)*(1:4)'+I; B=diagm(1 => [1,2,3]); C=ones(4,4);
julia> f1= λ-> one(λ);
julia> f2= λ-> λ;
julia> f3= λ-> exp(sin(λ/2));
julia> nep=SPMF_NEP([A,B,C],[f1,f2,f3]);
julia> cp=CORKPencil(nep,NleigsCorkLinearization());
julia> (A,B)=buildPencil(cp) # Form the pencil
julia> λv=eigen(A,B).values;
julia> λ=λv[sortperm(abs.(λv))[1]]; # Smallest eigval
julia> minimum(svdvals(compute_Mder(nep,λ))) # It's a solution
2.4364382475487156e-11
References:
Van Beeumen, Meerbergen and Michiels. Compact rational Krylov methods for nonlinear eigenvalue problems. SIAM Journal on Matrix Analysis and Applications, 36 (2), 820-838, 2015.
(A,B)=buildPencil(cp)
Constructs a pencil from a CORKPencil or CORKPencilLR. The returned matrices correspond to a generalized eigenvalue problem. See also CORKPencil and CORKPencilLR, and see lowRankCompress for further examples.
struct CORKPencilLR
function CORKPencilLR(M,N,Av,AvLR,Bv,BvLR,Z);
Represents / constructs a low-rank CORK-pencil. AvLR, BvLR and Z correspond to the low-rank factorization of the terms in Av and Bv. See CORKPencil and the reference below.
Example
The example illustrates a low-rank linearization of a Taylor expansion of the NEP $A_0-λI+vv^Te^{-λ}$.
julia> using LinearAlgebra
julia> A0=[1.0 3.0; -1.0 2.0]/10;
julia> v=reshape([-1.0 ; 1]/sqrt(2),2,1);
julia> Av=[-A0-v*v']
julia> Bv=[-one(A0)-v*v']
julia> BvLR=[v/2, -v/3, v/4, -v/5, v/6, -v/7, v/8, -v/9]
julia> AvLR=zero.(BvLR);
julia> Z=v;
julia> d=9;
julia> M=diagm( 0 => ones(d) )[2:end,:]
julia> N=diagm( -1 => 1 ./ (1:d-1) )[2:end,:]
julia> cplr=CORKPencilLR(M,N,Av,AvLR,Bv,BvLR,Z);
julia> (AA,BB)=buildPencil(cplr);
julia> λ=eigen(AA,BB).values[end];
julia> minimum(svdvals(A0-λ*I+v*v'*exp(-λ)))
8.870165379112754e-13
References:
- Section 7 in Van Beeumen, Meerbergen and Michiels. Compact rational Krylov methods for nonlinear eigenvalue problems. SIAM Journal on Matrix Analysis and Applications, 36 (2), 820-838, 2015.
cplr=lowRankCompress(cp_org::CORKPencil,dtilde,rk)
Constructs a CORKPencilLR from a CORKPencil. This is done by assuming that terms higher than dtilde are of low rank, with rank rk. More precisely, all A[j] and B[j] for j>dtilde are assumed to be of the form $C_jZ^T$.
Example
This illustrates how to form a CORKPencil from a NEP, and subsequently form a smaller pencil using CORKPencilLR.
julia> A0=[1.0 3.0; -1.0 2.0]/10;
julia> v=[-1.0 ; 1]/sqrt(2);
julia> nep=DEP([A0,v*v']);
julia> cp_org=CORKPencil(nep,IarCorkLinearization(d=10));
julia> cplr=lowRankCompress(cp_org,1,1);
julia> (AA,BB)=buildPencil(cplr);
julia> λ=eigen(AA,BB).values[end];
julia> minimum(svdvals(A0-λ*I+v*v'*exp(-λ))) # Check if it is an eigval
2.7621446071952216e-14
Helper types
There are also the helper types Mder_NEP and Mder_Mlincomb_NEP. These are further described in the section about Compute functions.
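As a hedged sketch of how such a helper type might be used (the exact constructor signature is documented on the Compute functions page; here we assume a constructor of the form Mder_NEP(n,f), where f(λ,der) returns the der'th derivative matrix):
julia> A0=[1.0 3; 4 5]; A1=[3.0 4; 5 6];
julia> f=(λ,der)-> der==0 ? A0+λ*A1 : der==1 ? A1 : zero(A0); # derivatives of A0+λ*A1
julia> nep=Mder_NEP(2,f); # assumed signature: problem size n=2 and the derivative function f
julia> compute_Mder(nep,3.0)-(A0+3.0*A1) # should be the zero matrix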