NAClab -- Numerical Algebraic Computing Toolbox for Matlab
Copyright © 2012 by Zhonggang Zeng. All rights reserved.
Updated on August 18, 2018
Unique capabilities: Solving hypersensitive problems from empirical data
Intuitive WYSIWYG polynomial computation interface in Matlab
Solving linear operator equation L(z) = b directly from linear transformation L even if it is rank-deficient
New! Rank-r Newton's iteration solving nonlinear systems f(x) = 0 with non-isolated solutions and singular Jacobians
Solving nonlinear least squares problems with the Gauss-Newton iteration.
Featured modules: Polynomial systems, Jordan Canonical Forms, multiplicities of nonlinear systems,
polynomial GCD / factorizations, ....
Research toolbox: Generic linear equation solver without matrices, Gauss-Newton iteration, rank-r Newton's iteration ....
Software building blocks [Link]
NAClab is a joint research project in development by Tien-Yien Li and Zhonggang Zeng. It is an expansion of its predecessor Apalab (whose Maple counterpart is ApaTools).
Disclaimer: NAClab is an on-going research project; bugs and shortcomings likely exist. The authors are not responsible for any damage or loss from using this software. Users should verify their results through whatever means.
Project leaders: Tien-Yien Li and Zhonggang Zeng (Contact via email: zzeng at neiu dot edu)
Collaborators/contributors: Liping Chen, Tianran Chen, Wenrui Hao, Tsung-Lin Lee, Wenyuan Wu, Andrew Sommese
Solving linear equations L(z) = b
- directly from linear transformation L without representation matrix
- including homogeneous equation L(z) = 0
- even if L is rank-deficient
NAClab can directly solve a general linear equation L(z) = b for its general
numerical solution where L is a linear transformation between
(products of) vector spaces. As a special case, the module LinearSolve
solves L(z) = 0 for the numerical kernel. [ A short paper on LinearSolve ]
A general numerical solution within an error tolerance is in the form of
z = z0 + c1*z1+...+ck*zk
where z0 is the minimum norm solution, c1,..,ck are constants and {z1,...,zk} forms a basis for the numerical kernel of L within the error tolerance
For a demo, NAClab outputs polynomials f and g such that p f + q g = u, where p, q are given polynomials of degrees m and n respectively and u = GCD(p,q) of degree k. In other words, NAClab solves the linear equation
L(f,g) = u
where L is the linear transformation defined by L(f,g) = p f + q g with fixed parameters p, q.
Users do not need to provide representation matrices for the
linear transformations unless they choose to do so. NAClab
generates representation matrices internally.
The complete process for this example is as follows
Define an anonymous function for the linear transformation
>> map2gcd = @(f,g,p,q) pplus(ptimes(p,f),ptimes(q,g));
Enter polynomials f, g, u as character strings, provide the domain and parameter for the linear transformation:
>> f = '5.99999 - 9*x + 3*x^3 - 4*x^4 + 6*x^5 - 2*x^7';
>> g = '-2 + 5*x - 3*x^2 - x^3 + x^4 + 2*x^5 - 3.00002*x^6 + x^8';
>> u = '2-2.99999*x+1.00001*x^3';
>> domain = {'1+x+x^2+x^3+x^4+x^5','1+x+x^2+x^3+x^4'};
>> parameter = {f,g};
Just like that, we are ready to call LinearSolve with error tolerance 1e-4
>> [Z,K,lcond,res] = LinearSolve({map2gcd,domain,parameter}, u, 1e-4);
The minimum norm solution:
>> Z
Z =
    '0.315213638005548 + 0.0325895733429298*x + 0.021703…'
    '-0.054358027177057 + 0.0434095337991119*x + 0.10852…'
The numerical kernel within the error tolerance 1e-4:
>> K{:}
ans =
    '0.249999389461494 - 0.250001189336557*x - 0.2500021…'
    '0.74999762066815 - 0.500002203566776*x^4'
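As a quick sanity check (a sketch, assuming Z is returned as a 1x2 cell of polynomial strings and K{1} is a kernel basis element of the same shape), the residual of the minimum norm solution and the kernel property can be verified with the quick-access functions ptimes, pplus, pminus and pnorm:
>> pnorm(pminus(map2gcd(Z{1},Z{2},f,g), u))    % residual of the minimum norm solution; should be tiny
>> pnorm(map2gcd(K{1}{1},K{1}{2},f,g))         % a kernel element should map to (nearly) the zero polynomial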
Rank-r Newton's iteration (preprint with examples, syntax)
- solve nonlinear system f(x) = 0 even if the solution is non-isolated
- directly from smooth mapping f between products of vector spaces
- even if the Jacobian is rank-deficient
Solving polynomial systems
by the homotopy method based on HOM4PS
Polynomial systems can be solved for numerical solutions in a
straightforward and intuitive setting. For example: To solve the
following polynomial system with 20 zeros
-x^5 + y^5 - 3*y - 1 = 0
5*y^4-3 = 0
-20*x+y-z = 0
simply call "psolve" with straitforward input [ more details ]
>> P = {'-x^5+y^5-3*y-1','5*y^4-3','-20*x+y-z'}; % define the polynomial system
>> [Solutions, variables] = psolve(P) % call the polynomial system solver
Solutions =
Column 1
0.778497746685646 + 0.893453081179308i
-0.000000000000000 + 0.880111736793394i
-15.569954933712914 -16.988949886792764i
... ...
Column 20
0.778497746685646 - 0.893453081179308i
0.000000000000000 - 0.880111736793393i
-15.569954933712925 +16.988949886792767i
variables =
'x' 'y' 'z'
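As a quick check (a sketch, assuming the outputs Solutions and variables shown above), a computed solution can be substituted back into the system with PolynomialEvaluate (alias peval); the values should be near zero:
>> z = Solutions(:,1).';                                % the first numerical solution as a row vector
>> residuals = cellfun(@(pk) peval(pk,variables,z), P)  % evaluate each polynomial of P at the solution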
Direct polynomial manipulation:
Polynomials can now be entered and shown as character strings, such as
>> f = '3*x^2 - (2-5i)*x^3*y^4 - 1e-3*z^5-6.8'
>> g = '-2*y^3 - 5*x^2*z + 8.2'
Using polynomial strings as input, users can now perform common polynomial operations such as addition, multiplication, power, evaluation, differentiation, factorization, extracting coefficients, and finding the greatest common divisor (GCD), by calling NAClab functions such as
>> p = PolynomialPlus('2*x^5-3*y','4+x*y')      % add any number of polynomials
>> q = PolynomialTimes(f,g,h)                   % multiply any number of polynomials
>> v = PolynomialEvaluate(f,{'x','z'},[3,4])    % evaluate f(x,y,z) for x=3, z=4
and a lot more.
For example, to compute a greatest common divisor:
>> f = '10 - 5*x^2*y + 6*x*y^2 - 3*x^3*y^3';
>> g = '30 + 10*y + 18*x*y^2 + 6*x*y^3'
>> u = PolynomialGCD(f,g)
u =
33.5410196624968 + 20.1246117974981*x*y^2
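A hedged sketch of how this result can be checked: PolynomialGCD also returns the cofactors v and w with f = u*v and g = u*w (see the function list below), so the backward error can be examined with ptimes, pminus and pnorm:
>> [u,v,w] = PolynomialGCD(f,g);
>> pnorm(pminus(f, ptimes(u,v)))    % should be tiny: f - u*v
>> pnorm(pminus(g, ptimes(u,w)))    % should be tiny: g - u*w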
Key features:
The functions for
- Solving polynomial systems
- Polynomial GCD
- Polynomial Factorization
- Multiplicity and dual spaces of a nonlinear system at a zero
- Numerical Jordan Canonical Form
- Numerical Rank revealing and updating/downdating
are major developments in both algorithm design and software implementation. For instance, the univariate factorization is based on an award-winning paper on accurate computation of multiple roots.
Table of contents
List of functions:
Functions are in three categories: Quick-access functions, Programming tools for polynomial computations, and Matrix computation tools.
Functions in this category accept input polynomials as character strings, such as
>> f = '3*x^2 - (2-5i)*x^3*y^4 - 1e-3*z^5-6.8'
>> g = '-2*y^3 - 5*x^2*z + 8.2'
A list of quick access functions is as follows. Use, say, >> help GetVariables to access the documentation.
Many functions have shortened aliases. For example, pvar is an alias for GetVariables.
- GetVariables (alias pvar) --- extracting variable names of a polynomial
- PolynomialCoefficient (alias pcoef) --- extracting a coefficient
- PolynomialDerivative (alias pder) --- partial derivative of a polynomial
- PolynomialEvaluate (alias peval) --- polynomial evaluation
- PolynomialFactor (alias pfac) --- numerical polynomial factorization (multivariate irreducible factorization is an on-going joint work with Wenyuan Wu)
- PolynomialGCD (alias pgcd) --- computing the numerical greatest common divisor of a polynomial pair
- PolynomialPlus (alias pplus) --- addition of any number of polynomials
- PolynomialMinus (alias pminus) --- a polynomial minus another
- PolynomialTimes (alias ptimes) --- multiplication of any number of polynomials
- PolynomialPower (alias ppower) --- a polynomial raised to a power
- PolynomialNorm (alias pnorm) --- calculating the 2-norm of a polynomial
- PolynomialSupport (alias psup) --- extracting the monomial support of polynomials
- PolynomialClear (alias pclear) --- clearing tiny coefficients, real parts and imaginary parts of a polynomial
- PolynomialJacobian (alias pjac) --- generating the (symbolic) Jacobian matrix of a polynomial system
- psolve --- solve polynomial systems by the homotopy method
- TotalDegree --- extracting the total degree of a polynomial
- TupleDegree --- extracting the tuple degree of a polynomial
- FactorDistance --- calculating the distance between two polynomial factorizations
- RandomPolynomial (alias prand) --- generating a random polynomial
- Multiplicity --- computing the multiplicity structure, a.k.a. dual space, of a polynomial system at an isolated zero (joint work with Wenrui Hao and Andrew Sommese)
- ShowDual --- displaying the dual basis computed by Multiplicity
- ConvolutionMatrix --- generating the convolution matrix of a univariate polynomial
- SylvesterMatrix --- generating the Sylvester matrix of a univariate polynomial pair
- BezoutMatrix --- generating the Bezout matrix of two univariate polynomials
- CompanionMatrix --- generating the companion matrix of a univariate polynomial
- MacaulayMatrix --- generating the Macaulay matrix for the multiplicity structure of a multiple zero of a polynomial system
Programming tools for polynomial computations:
Functions in this category are designed for experts developing algorithms and implementations for numerical polynomial algebra. These functions require polynomials represented as either a coefficient matrix or a coefficient vector.
A univariate polynomial is represented by its coefficient vector. For example, f(x) = x^3 + 2*x^2 + 3*x + 4 is represented as
>> f = [1, 2, 3, 4]    % or its transpose
An m-variate polynomial f of n terms is represented by a coefficient matrix F of size (m+1) by n according to the following rules:
1. Every column of F represents one term (monomial)
2. F(i,j) for i < m+1 is the exponent of the i-th variable in the j-th term
3. F(m+1,j) is the coefficient of the j-th term
For example, let p(x,y,z) = 8.5 + (3-2i)*x^3*y + 5*x^2*z^5 - 2*y^3 + 6*y^3*z^2. Users can transform its string representation to the coefficient matrix representation by calling the quick access function PolynString2CoefMat:
>> f = '8.5 + (3-2i)*x^3*y + 5*x^2*z^5 - 2*y^3 + 6*y^3*z^2';
>> F = PolynString2CoefMat(f,{'x','y','z'})
F =
   0                  3.0000             0         0        2.0000
   0                  1.0000        3.0000    3.0000             0
   0                       0             0    2.0000        5.0000
   8.5000   3.0000 - 2.0000i       -2.0000    6.0000        5.0000
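For illustration (a minimal sketch following the rules above, not taken from the original documentation), a coefficient matrix can also be typed in directly and used with the expert functions, e.g. mvPolynEvaluate:
>> q = [1 0 2; 0 3 1; 2 -4 5]    % columns encode the terms of q(x,y) = 2*x - 4*y^3 + 5*x^2*y
>> mvPolynEvaluate(q,[1;1])      % evaluates q(1,1) = 2 - 4 + 5 = 3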
The following is a list of expert functions as software building blocks
- Main specialty functions
    - LinearTransformMatrix --- generate the matrix of a given linear transformation
    - LinearSolve --- solve a linear operator equation and matrix-vector equations
    - GaussNewton --- solve a nonlinear least squares problem by the Gauss-Newton iteration
    - Newton --- conventional and rank-r Newton's iteration for solving operator equations f(x)=0
- Transition functions between quick-access and expert functions
- Functions for univariate polynomial computation
    - uvGCD --- computing the GCD of a univariate polynomial pair
    - uvGCDfixedDegree --- compute the numerical GCD of two univariate polynomials with a given GCD degree
    - ExtendedGCD --- computing polynomials p and q such that p*f + q*g = gcd(f,g)
    - uvFactor --- univariate factorization
    - ConvolutionMatrix --- computing the convolution matrix of a polynomial
    - SylvesterMatrix --- computing the Sylvester matrix
- Functions for multivariate polynomial computation
    - mvPolynAdd --- add two polynomials represented as coefficient matrices
    - mvPolynClear --- clear tiny coefficients, real parts and imaginary parts of a polynomial represented as a coefficient matrix
    - mvPolynCoefMat2Vec --- convert a coefficient matrix to a coefficient vector (in sparse form) of a polynomial
    - mvPolynCoefVec2Mat --- convert a coefficient vector to a coefficient matrix of a polynomial
    - mvPolynDegree --- extract the (tuple or total) degree of a polynomial represented as a coefficient matrix
    - mvPolynDerivative --- take a partial derivative of a polynomial represented as a coefficient matrix
    - mvPolynEvaluate --- evaluate a polynomial represented as a coefficient matrix
    - mvPolynIndexer --- generate the indexing vector from a tuple degree bound
    - mvPolynMonomTotal --- calculate the number of monomials from a given degree bound
    - mvPolynMonomBasis --- generate the monomial basis from a given degree bound
    - mvPolynMultiply --- multiply two polynomials represented as coefficient matrices
    - mvPolynPower --- raise a polynomial represented as a coefficient matrix to a power
    - mvPolynProject --- project an m-variate polynomial to an n-variate polynomial (m>n) by setting m-n variables as constant
    - mvPolynSort --- sort polynomial terms in lexicographical order
    - mvPolynTimesMonom --- multiply a polynomial by a monomial
    - GrlexMonomialIndex --- find the index of a monomial in the graded lexicographical order
    - GrlexNextMonomial --- find the next monomial in the graded lexicographical order
    - SquarefreeFactor --- calculate a squarefree factorization of a polynomial
    - mvFactorRefine --- refine a coprime factorization of a polynomial
    - mvGCD --- compute the numerical GCD of two multivariate polynomials
    - mvGCDfixedDegree --- compute the numerical GCD of two multivariate polynomials with a given GCD degree
Matrix computation functions
- Major packages
    - NumericalJordanForm --- compute the numerical Jordan Canonical Form of a complex matrix even if entries are perturbed
    - NumericalRank --- compute the numerical rank, range and kernel of a matrix
    - NumericalRankUpdate --- update the numerical rank, range and kernel of a matrix after inserting a row or column
    - NumericalRankDowndate --- update the numerical rank, range and kernel of a matrix after deleting a row or column
Relevant publications
NAClab: A Matlab toolbox for numerical algebraic computation, Z. Zeng and T.-Y. Li, ACM Comm. in Computer Algebra, (47) 2013, pp. 170-173
Intuitive interface for solving linear and nonlinear system of equations, Z. Zeng, Proc. of ICMS 2018, 2018
Computing multiple roots of inexact polynomials, Z. Zeng, Mathematics of Computation, 74 (2005), pp. 869-903
Algorithm 835: MultRoot -- a Matlab package for computing polynomial roots and multiplicities, Z. Zeng, ACM Transactions on Mathematical Software, 30 (2004), pp. 218-315
Multiple zeros of nonlinear systems, B.H. Dayton, T.-Y. Li and Z. Zeng, Mathematics of Computation, (80) 2011, pp. 2143-2168
The numerical factorization of polynomials, W. Wu and Z. Zeng, Foundations of Computational Mathematics, DOI 10.1007/s10208-015-9289-1
Sensitivity and computation of a defective eigenvalue, Z. Zeng, preprint, [Resources / Matlab codes]
Regularization and matrix computation in numerical polynomial algebra, Z. Zeng
Mixed volume computation: A revisit, T.-L. Lee and T.-Y. Li
HOM4PS-2.0: A software package for solving polynomial systems by the polyhedral homotopy continuation method, T.-L. Lee, T.-Y. Li and C.-H. Tsai
Algorithm 931: An algorithm and software for computing multiplicity structure at zeros of nonlinear systems, W. Hao, A.J. Sommese and Z. Zeng, ACM TOMS
ApaTools: A software toolbox for approximate polynomial algebra, Zhonggang Zeng, in Software for Algebraic Geometry, IMA Volume 148, M.S. Stillman et al. eds., Springer, pp. 149-167, 2008
Download and installation:
1. Download the zip file (If your Matlab is older than the 2013 edition, try this file)
2. Unzip and extract the .p and .m files in a folder, such as c:/NAClab
3. Open Matlab and set path to the folder, e.g.
>> path(path,'c:/NAClab')
NAClab is then ready to go.
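A quick way to confirm that the path is set (an optional check, not part of the official instructions):
>> which psolve        % should display the location of psolve inside the NAClab folder
>> help LinearSolve    % should display the documentation of LinearSolve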
Modules with brief explanations and examples
- NulVector
The module for computing the smallest singular value of an upper-triangular matrix along with the corresponding left and right singular vectors. In many cases, it is desirable to determine if a matrix is approximately rank-deficient without computing a full-blown singular value decomposition. Module NulVector is designed for this purpose. It also serves as a submodule for ApproxiRank, which computes the approximate rank and approximate kernel.
syntax:
>> [s,x,y] = NulVector(R,theta)    % optional parameters may follow; see the example below
Input parameters:
R -- mxn upper triangular matrix (m ≥ n)
theta -- (optional) the rank threshold
scale -- (optional) a scaling factor, e.g. ||R||
m, n -- (optional) size of R
Output parameters:
s -- an approximation to the smallest singular value
x -- the unit approximate null vector
y -- the unit left approximate null vector
Example:
>> A    % a seemingly full-rank matrix
A =
     1    10     0     0     0     0     0     0     0     0
     0     1    10     0     0     0     0     0     0     0
     0     0     1    10     0     0     0     0     0     0
     0     0     0     1    10     0     0     0     0     0
     0     0     0     0     1    10     0     0     0     0
     0     0     0     0     0     1    10     0     0     0
     0     0     0     0     0     0     1    10     0     0
     0     0     0     0     0     0     0     1    10     0
     0     0     0     0     0     0     0     0     1    10
     0     0     0     0     0     0     0     0     0     1
>> [s,x,y] = NulVector(A,1e-10);
>> s    % the smallest singular value is tiny
s =
    9.900000000000001e-010
>> [x,y]    % the right and left singular vectors
ans =
   -0.99498743710662  -0.00000000098504
    0.09949874371066   0.00000000994888
   -0.00994987437107  -0.00000009949864
    0.00099498743711   0.00000099498743
   -0.00009949874371  -0.00000994987437
    0.00000994987437   0.00009949874371
   -0.00000099498743  -0.00099498743711
    0.00000009949864   0.00994987437107
   -0.00000000994888  -0.09949874371066
    0.00000000098504   0.99498743710662
>> [norm(A*x), norm(y'*A)] % verify them as approxi-nulvectors
ans =
1.0e-009 *
0.99000000000001 0.99000000000001
- NumericalRank
The function NumericalRank computes the numerical rank rank_θ(A) and the numerical kernel K_θ(A) of a given matrix A within a given rank threshold θ. The definitions of the approximate rank and approximate kernel are
    rank_θ(A) = the smallest rank of all matrices B such that ||B - A|| < θ
    K_θ(A)    = the kernel of the matrix B that is nearest to A among all matrices whose exact rank equals rank_θ(A)
<Syntax>
[r,Basis,C] = NumericalRank(A,tol,HL)
<Input Parameters>
1. A -- the target matrix;
2. tol -- (optional) the rank decision threshold;
          default: tol = sqrt(n)*norm(A,1)*eps;
3. HL -- (optional)
         Set to 'high rank' if A is a high rank matrix. (default)
         Set to 'low rank' if A is a low rank matrix.
<Output Parameters>
1. r -- the numerical rank of A;
2. Basis -- (optional)
For high rank cases...
a matrix whose columns form an orthonormal basis of
the numerical kernel;
For low rank cases...
a matrix whose columns form an orthonormal basis of
the numerical range;
3. C -- (optional) Matlab cell
array contains information required
by updating/downdating;
For high rank cases...
C{1,1} = rank : the numerical rank of A;
C{2,1} = Basis : matrix whose columns form an orthonormal kernel basis;
C{3,1} = Q : the Q in the QR decomposition of the kernel stacked matrix;
C{4,1} = R : the R in the QR decomposition of the kernel stacked matrix;
C{5,1} = tau : the scaling factor in the kernel stacked matrix;
C{6,1} = tol : the rank decision threshold;
For low rank cases...
C{1,1} = rank : the numerical rank of A;
C{2,1} = U : U in the USV+E decomposition of A;
C{3,1} = V : V in the USV+E decomposition of A;
C{4,1} = S : S in the USV+E decomposition of A;
C{5,1} = tol : the rank decision threshold;
<Reference>
[1] T.Y. Li and Z. Zeng, "A
Rank-Revealing Method with Updating, Downdating
and Applications", SIAM J. Matrix Anal. and Appl., 26 (2005), pp.
918--946.
[2] T.L. Lee, T.Y. Li and Z.
Zeng, "A Rank-Revealing Method with Updating,
Downdating and Applications, Part II", SIAM J. Matrix Anal. and Appl.
(2009).
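A minimal usage sketch (an illustration added here, not from the original page; the matrix and tolerance are made up): the 3x3 matrix below has exact rank 2, so its numerical rank within a modest threshold should also be 2 after a tiny perturbation.
>> A = [1 2 3; 4 5 6; 7 8 9] + 1e-12*randn(3);    % a nearly rank-deficient matrix
>> [r,Basis] = NumericalRank(A, 1e-8);
>> r                                              % expected numerical rank: 2
>> norm(A*Basis)                                  % Basis spans the numerical kernel, so A*Basis should be tiny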
- NumericalRankUpdate
<Purpose>
Computing the numerical rank of an updated matrix.
<Syntax>
[r,Basis,C] = NumericalRankUpdate(A,pth,vec,C,RC)
<Input Parameters>
1. A -- the target matrix;
2. pth -- the index of the row/column to be inserted;
3. vec -- the row/column vector to be inserted;
4. C -- cell array containing information required by updating/downdating;
5. RC -- Set to 'row', then the pth row will be inserted.
         Set to 'column', then the pth column will be inserted.
<Output Parameters>
1. r -- the numerical rank of the updated matrix;
2. Basis --
   For high rank cases...
       a matrix whose columns form an orthonormal basis of the numerical kernel;
   For low rank cases...
       a matrix whose columns form an orthonormal basis of the numerical range;
3. C -- Matlab cell array
   For high rank cases...
C{1,1} = rank : the numerical rank of the updated matrix;
C{2,1} = Basis : matrix whose columns form an orthonormal kernel basis;
C{3,1} = Q : the Q in the QR decomposition of the kernel stacked matrix;
C{4,1} = R : the R in the QR decomposition of the kernel stacked matrix;
C{5,1} = tau : scaling factor in the kernel stacked matrix;
C{6,1} = tol : the rank decision threshold;
   For low rank cases...
C{1,1} = rank : the numerical rank of the updated matrix;
C{2,1} = U : the U in the USV+E decomposition of the updated matrix;
C{3,1} = V : the V in the USV+E decomposition of the updated matrix;
C{4,1} = S : the S in the USV+E decomposition of the updated matrix;
C{5,1} = tol : the rank decision threshold;
- NumericalRankDowndate
<Purpose>
Computing the numerical rank of a downdated matrix.
<Syntax>
[r,Basis,C] = NumericalRankDowndate(A,pth,C,RC)
<Input Parameters>
1. A -- the target matrix;
2. pth -- the index of the row/column to be deleted;
3. C -- cell array containing information required by updating/downdating;
4. RC -- Set to 'row', then the pth row will be deleted.
         Set to 'column', then the pth column will be deleted.
<Output Parameters>
1. r -- the numerical rank of the downdated matrix;
2. Basis --
   For high rank cases...
       a matrix whose columns form an orthonormal basis of the numerical kernel;
   For low rank cases...
       a matrix whose columns form an orthonormal basis of the numerical range;
3. C -- Matlab cell array
   For high rank cases...
C{1,1} = rank : the numerical rank of the downdated matrix;
C{2,1} = Basis : matrix whose columns form an orthonormal kernel basis;
C{3,1} = Q : the Q in the QR decomposition of the kernel stacked matrix;
C{4,1} = R : the R in the QR decomposition of the kernel stacked matrix;
C{5,1} = tau : scaling factor in the kernel stacked matrix;
C{6,1} = tol : the rank decision threshold;
   For low rank cases...
C{1,1} = rank : the numerical rank of the downdated matrix;
C{2,1} = U : the U in the USV+E decomposition of the downdated matrix;
C{3,1} = V : the V in the USV+E decomposition of the downdated matrix;
C{4,1} = S : the S in the USV+E decomposition of the downdated matrix;
C{5,1} = tol : the rank decision threshold;
- NumericalJordanForm
Computing a numerical Jordan decomposition X*J*inv(X) of a given matrix A within a distance threshold theta, formulated under a 'three-strikes principle'. For details, see
    Z. Zeng and T.Y. Li, A numerical algorithm for computing the Jordan Canonical Form, preprint, 2007
In a nutshell, let A be a matrix whose entries are given approximately with small error magnitude. The exact JCF of A will then be degraded. However, it is possible to recover the underlying Jordan structure and approximations of X and J by NumericalJordanForm.
Syntax: There are several choices for either LHS or RHS
    >> LHS = RHS
    ------------------- | --------------------------------------
    [J,X]               | NumericalJordanForm(A)
    [J,X,e,s,t]         | NumericalJordanForm(A,theta)
                        | NumericalJordanForm(A,theta,tau,gap)
Input:  A -- matrix whose AJCF is to be computed
        theta -- (optional) distance threshold
        tau -- (optional) deflation threshold; simple eigenvalues whose geometric condition numbers are above tau will be deflated
        gap -- (optional) singular value gap used in rank revealing
Output: J -- the Approximate Jordan Canonical Form
        X -- the principal vector matrix
        e -- the list of distinct eigenvalues
        s -- the Jordan block sizes of the eigenvalues
        t -- the residuals (backward errors) and the staircase condition numbers of the eigenvalues
Example: One can construct a test matrix with a given Jordan form by
>> A = MatrixWithGivenJordanForm([2 3],{[3 2 1], [2 2]})
A =
     0     5     0    -3     0     0    -1     0     0    -2
   -10     1    10     0     9     0     1    -4    -3     0
    -2     5     2    -3     0     0    -1     0     0    -2
   -17     0    16     1    15     0     1    -7    -5    -1
     6    -1    -5     1    -3     0     0     3     2     1
     0     0     0     0     0     2     0     0     0     0
    -7    -2     7     0     6     0     4    -2    -2     0
     6     0    -6     0    -6     0     0     6     2     0
     9    -3    -6     3    -6     0     0     3     5     3
    10    -7    -6     5    -6     0     1     3     2     6
Its AJCF can be computed as follows:
>> [J,X,e,s,t] = NumericalJordanForm(A);
>> J
J =
    3.0000    1.0835         0         0         0         0         0         0         0         0
         0    3.0000         0         0         0         0         0         0         0         0
         0         0    3.0000    0.9464         0         0         0         0         0         0
         0         0         0    3.0000         0         0         0         0         0         0
         0         0         0         0    2.0000    0.6457         0         0         0         0
         0         0         0         0         0    2.0000   -3.5426         0         0         0
         0         0         0         0         0         0    2.0000         0         0         0
         0         0         0         0         0         0         0    2.0000   -1.1010         0
         0         0         0         0         0         0         0         0    2.0000         0
         0         0         0         0         0         0         0         0         0    2.0000
The distinct eigenvalues:
>> e
e =
3.0000
2.0000
The Jordan block sizes
>> s
s =
2 2 0
3 2 1
The backward error of the multiple eigenvalues
>> t(:,1)
ans =
1.0e-016 *
0.4897
0.6711
The backward error of the Jordan decomposition X*J*inv(X)
>> norm(A-X*J*inv(X))
ans =
1.488853685812562e-13
The staircase condition numbers
>> t(:,2)
ans =
33.5184
78.4598
- uvGCD
Computing the approximate greatest common divisor (AGCD) of a univariate polynomial pair (f,g) within a distance threshold θ, denoted by u = GCD_θ(f,g).
Computing the exact GCD is an ill-posed problem whose solution generically degrades drastically under a tiny perturbation of the given polynomial coefficients. The AGCD is, on the other hand, continuously dependent on the problem data, and uvGCD can accurately compute the underlying GCD even if the given polynomial pair is inexact.
The choice of the parameter θ:
Let (f,g) be the given polynomial pair that approximates an underlying pair (p,q) whose GCD is to be computed. Then the threshold θ needs to satisfy ||(f,g) - (p,q)|| < θ < t, where t is the distance from (f,g) to the nearest polynomial pair (r,s) whose GCD has a higher degree. In other words, the lower bound of θ is the magnitude of the data error and the upper bound is generally an unknown number. The rule of thumb is to choose θ slightly larger than the magnitude of the data error. For example, if the coefficients of (f,g) are accurate to around machine precision (2.2e-16), then it is likely safe to set θ = 1e-10.
See the reference: Zhonggang Zeng, The approximate GCD of inexact polynomials, I: a univariate algorithm, Preprint (2004)
Example:
>> f
f =
3 8 14 20 11 4
>> g
g =
5 14 26 40 30 20 11 4
>> [u,v,w,res,cond] = uvGCD(f,g,1.0e-10)
u =
8.30662386291807
16.61324772583615
24.91987158875423
33.22649545167231
v =
0.36115755925731
0.24077170617154
0.12038585308577
w =
0.60192926542885
0.48154341234308
0.36115755925731
0.24077170617154
0.12038585308577
res =
1.614869854000228e-016
cond =
45.53195717395850
Here, the syntax:
>> [u,v,w,res,cond] = uvGCD(f,g,theta)
Input: f, g -- coefficient vectors of the polynomials f and g
       theta -- tolerance within which the maximum degree GCD is sought
Output: u -- the approximate GCD
        v -- the cofactor for f
        w -- the cofactor for g
        res -- the backward error, or residual
        cond -- the condition number of the approximate GCD
More specifically, for given (f,g) and θ, the module uvGCD calculates a triplet (u,v,w) of polynomials satisfying the following "three-strikes" principles:
(a) Backward nearness: ||(f,g) - (u*v,u*w)|| < θ
(b) Maximum co-dimension: The polynomial pair (u*v,u*w) belongs to the manifold
        P_k = {(p,q) | GCD(p,q) is of degree k}
    that has the highest co-dimension among all such manifolds intersecting the θ-neighborhood of (f,g).
(c) Minimum distance: The pair (u*v,u*w) is the nearest to (f,g) among all polynomial pairs in P_k.
The polynomial u = GCD(u*v,u*w) is called an AGCD of (f,g).
- ExtendedGCD
ExtendedGCD computes the extended GCD in the approximate sense. For a given polynomial pair (f,g), it computes a polynomial pair (p,q) such that p f + q g = GCD(f,g) in the approximate sense.
Calling syntax:
>> [p, q] = ExtendedGCD(v,w);
Input: v, w -- approximate cofactors, as in the output of
       >> [u,v,w] = uvGCD(f,g,tol)
Output: p, q -- polynomials such that p*f + q*g = GCD(f,g) approximately
Example:
>> f = [3 8 14 20 11 4];              % set polynomials f and g
>> g = [5 14 26 40 30 20 11 4];
>> [u,v,w,res,cond] = uvGCD(f,g);     % calculate the GCD triplet (u,v,w)
>> [p,q] = ExtendedGCD(v,w)           % calculate the extended GCD using v and w
p =
-4.75485272428887
-0.98068837438458
-0.89153488580416
-1.18871318107222
q =
2.85291163457332
0.20802480668764
>> h = conv(p,f)+conv(q,g);    % compute p*f+q*g
>> h'
ans =
0.00000000000000
0.00000000000001
0
0.00000000000001
0
-0.98068837438458
-1.96137674876916
-2.94206512315374
-3.92275349753832
>> u    % compare with u = GCD(f,g)
u =
-0.98068837438458
-1.96137674876916
-2.94206512315374
-3.92275349753832
- uvFactor
The module uvFactor computes an approximate factorization
    (a1*x + b1)^m1 (a2*x + b2)^m2 ... (ak*x + bk)^mk
of a given polynomial f(x) within a given tolerance θ for data perturbation.
When the coefficients of f(x) are approximate (inexact) and the multiplicities mj are nontrivial, finding such a factorization has been a long-standing challenge in numerical computation since the factorization problem in the exact sense is ill-posed. Namely, a tiny perturbation of the coefficients of the given polynomial f almost always degrades the accuracy of the factors and multiplicities drastically. The module uvFactor in NAClab, however, is designed to calculate the factors and multiplicities accurately even if the input polynomial is perturbed.
Syntax:
>> [F,res] = uvFactor(f, theta )
>> [F,res] = uvFactor(f, theta, 1 )
Input: f -- coefficient vector of the polynomial to be factored
       theta -- (optional) backward error tolerance
       showroots -- (optional, 0 or 1) showing roots or not
Output: F -- kx3 matrix containing the factors, with each row [ai, bi, mi] representing a factor (ai*x + bi)^mi
Example: We construct the polynomial p(x) = (x-1)^40 (x-2)^30 (x-3)^20 (x-4)^10 with coefficients rounded to 16 digits.
>> p = poly([ones(1,40),2*ones(1,30),3*ones(1,20),4*ones(1,10)]);
>> [F,res] = uvFactor(p,1e-9)
F =
    0.51445747892761   -2.05782991571042   10.00000000000000
    0.67077048679987   -2.01231146039961   20.00000000000000
    0.94861271967198   -1.89722543934395   30.00000000000000
    1.49988840579248   -1.49988840579248   40.00000000000000
res =
8.584956337989512e-016
The roots and multiplicities can be shown by
>> [-F(:,2)./F(:,1),F(:,3)]
ans =
4.00000000000002 10.00000000000000
2.99999999999998 20.00000000000000
2.00000000000001 30.00000000000000
1.00000000000000 40.00000000000000
Or, use the root-showing flag parameter
>> [F,res] = uvFactor(p,1e-9,1);
THE CONDITION NUMBER: 6.70133
THE BACKWARD ERROR: 3.51e-015
THE ESTIMATED FORWARD ROOT ERROR: 4.70e-014
FACTORS
( x - 3.999999999999993 )^10
( x - 3.000000000000007 )^20
( x - 1.999999999999998 )^30
( x - 1.000000000000000 )^40
- mvGCD
Computing the approximate greatest common divisor (AGCD) of a multivariate polynomial pair (f,g) within a distance threshold θ, denoted by u = GCD_θ(f,g).
Computing the exact GCD is an ill-posed problem whose solution generically degrades drastically under a tiny perturbation of the given polynomial coefficients. The AGCD is, on the other hand, continuously dependent on the problem data, and mvGCD can accurately compute the underlying GCD even if the given polynomial pair is inexact.
The choice of the parameter θ:
Let (f,g) be the given polynomial pair that approximates an underlying pair (p,q) whose GCD is to be computed. Then the threshold θ needs to satisfy ||(f,g) - (p,q)|| < θ < t, where t is the distance from (f,g) to the nearest polynomial pair (r,s) whose GCD has a higher degree. In other words, the lower bound of θ is the magnitude of the data error and the upper bound is generally an unknown number. The rule of thumb is to choose θ slightly larger than the magnitude of the data error. For example, if the coefficients of (f,g) are accurate to around machine precision (2.2e-16), then it is likely safe to set θ = 1e-10.
Reference:
Z. Zeng and B.H. Dayton, The approximate GCD of inexact polynomials, II: a multivariate algorithm, Proc. of ISSAC '04, pp. 320-327, 2004.
Syntax:
>> [u,v,w,res,cond] = mvGCD(p, q, theta, ctrl)
where
Input: p, q -- coefficient matrices of p and q
       theta -- (optional) the residual tolerance
       ctrl -- (optional) control parameter; if ctrl = 1 then refinement will be performed, otherwise the code stops without refinement.
Output: u -- the AGCD of p and q
        v, w -- the cofactors
        res -- the residual
        cond -- the AGCD condition number (if ctrl = 1).
Example:
f =
     0     1     2     3     4     0     1     2     0     1     2     0     0
     0     0     0     0     0     1     1     1     2     2     2     3     4
     4     4    -3    -2     1    12     6    -6    13     2    -2     6     1
g =
     0     1     2     3     4     0     1     2     3     0     1     2     0     1     0
     0     0     0     0     0     1     1     1     1     2     2     2     3     3     4
     4    12    13     6     1    12    26    18     4    13    18     6     6     4     1
>> [u,v,w,res,cond] = mvGCD(f,g)
u =
          0     1.0000     2.0000          0     1.0000          0
          0          0          0     1.0000     1.0000     2.0000
   -10.3923   -20.7846   -10.3923   -20.7846   -20.7846   -10.3923
v =
          0     1.0000     2.0000          0     1.0000          0
          0          0          0     1.0000     1.0000     2.0000
    -0.3849     0.3849    -0.0962    -0.3849     0.1925    -0.0962
w =
          0     1.0000     2.0000          0     1.0000          0
          0          0          0     1.0000     1.0000     2.0000
    -0.3849    -0.3849    -0.0962    -0.3849    -0.1925    -0.0962
res =
3.5527e-015
cond =
1.4832
- SquarefreeFactor
For a given multivariate polynomial p, the module SquarefreeFactor computes a squarefree factorization
    p0 (p1)^m1 (p2)^m2 ... (pk)^mk
of p along with the multiplicities m1, m2, ..., mk accurately even if the polynomial is perturbed. Here p0 is a constant factor and the pj's are nontrivial polynomial factors for j > 0.
Syntax:
>> [F,res,cond] = SquarefreeFactor(p,tol);
Input: p -- a multivariate polynomial (as a coeff. matrix) to be factored
       tol -- (optional) backward error tolerance
Output: F -- cell of size kx2. For each j = 1, 2, ..., k, F{j,1} and F{j,2} are a squarefree factor and its multiplicity respectively
        res -- residual
        cond -- condition number of the factorization
Example: We construct a factorable polynomial
>> p = [2 1 0 0; 0 1 2 0; 1 1 -1 1];
>> q = [3 0 3 0; 2 2 0 0; 1 1 -2 -2];
>> r = [3 0 1 0; 0 3 2 0; 1 -1 -3 2];
>> u = mvPolynPower(p,4);
>> v = mvPolynPower(q,3);
>> w = mvPolynMultiply(u,v);
>> f = mvPolynMultiply(w,r);
>> [F,res,cond] = SquarefreeFactor(f,1e-10)
F =
[3x1 double] [1]
[3x4 double] [1]
[3x4 double] [3]
[3x4 double] [4]
res =
9.094947017729282e-013
cond =
0.00417001580803
Factors can be retrieved as follows:
>> F{1,1}    % the constant factor
ans =
                   0
                   0
    2.32164695714406
>> F{2,1}    % factor of multiplicity F{2,2} = 1
ans =
                   0   3.00000000000000   1.00000000000000                  0
                   0                  0   2.00000000000000   3.00000000000000
   -1.19889333343897  -0.59944666671943   1.79834000015840   0.59944666671949
>> F{3,1}    % factor of multiplicity F{3,2} = 3
ans =
                   0   3.00000000000000                  0   3.00000000000000
                   0                  0   2.00000000000000   2.00000000000000
    1.46833846147498   1.46833846147490  -0.73416923073738  -0.73416923073740
>> F{4,1}    % factor of multiplicity F{4,2} = 4
ans =
                   0   2.00000000000000   1.00000000000000                  0
                   0                  0   1.00000000000000   2.00000000000000
    1.16082347857208   1.16082347857205   1.16082347857192  -1.16082347857206
Even if one makes a substantial perturbation, a squarefree factorization can still be calculated up to the data accuracy:
>> g(3,:) = f(3,:)+1e-6*rand(1,size(f,2));    % perturbed polynomial
>> [F,res,cond] = SquarefreeFactor(g,1e-4,1)
F =
[3x1 double] [1]
[3x4 double] [1]
[3x4 double] [3]
[3x4 double] [4]
res =
1.216959617522662e-006
cond =
0.00417001629401
>> F{2,1}
ans =
                   0   3.00000000000000   1.00000000000000                  0
                   0                  0   2.00000000000000   3.00000000000000
   -1.19889332029971  -0.59944665012542   1.79834000569698   0.59944664442427
- mvFactorRefine
For a given initial factorization (stored in F)
    p0 (p1)^m1 (p2)^m2 ... (pk)^mk
of a multivariate polynomial p, the module mvFactorRefine attempts to improve the accuracy by applying the Gauss-Newton iteration module GaussNewton. This module is used as a submodule of the module SquarefreeFactor.
Syntax: >> [G,res,cond] = mvFactorRefine(F, p);
Input: F -- a kx2 cell containing the factors and multiplicities of the initial factorization of p. For each j = 1, 2, ..., k, F{j,1} is the j-th factor with multiplicity F{j,2}.
       p -- multivariate polynomial in coeff. matrix
Output: G -- a cell containing the refined factors in the same format as F
        res -- the residual, i.e., backward error of the factorization
        cond -- the condition number of the factorization
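A usage sketch (assuming the polynomial f and the factorization cell F from the SquarefreeFactor example above):
>> [G,res,cond] = mvFactorRefine(F, f);    % refine the factors stored in F against f
>> res                                     % backward error after refinement; expected to be small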
- Polynomial2CoefficientMatrix
Convert a polynomial to a coefficient matrix (requires the Symbolic Math Toolbox).
A coefficient matrix F for an m-variate polynomial p of n terms satisfies:
(1) F is of size (m+1) x n
(2) each term of p is represented by a column of F
(3) F(m+1,k) is the coefficient of the k-th term of p
(4) F(j,k) for j=1:m is the exponent of the j-th variable in the k-th term
Syntax: >> F = Polynomial2CoefficientMatrix(p,u)
        >> F = Polynomial2CoefficientMatrix(p,[x,y,z])
Input: p -- polynomial
       u or [x,y,z] -- vector of variables
Output: F -- coefficient matrix
Example:
>> syms x y z
>> p = 3 + x^3*y^2 - 4*y*z^5 + 8*x^2*y^4*z^4
>> F = Polynomial2CoefficientMatrix(p,[x,y,z])
F =
     0     3     0     2
     0     2     1     4
     0     0     5     4
     3     1    -4     8
- mvPolynDegree
Computing the total or tuple degree
Syntax: >> d = mvPolynDegree(p,'total')
        >> d = mvPolynDegree(p,'tuple')
Example:
>> f
f =
     0     3     6     0     3     0     0     3     0     0
     0     0     0     3     3     6     0     0     3     0
     0     0     0     0     0     0     3     3     3     6
    -2    -1     1    -1     2     1    -1     2     2     1
>> d = mvPolynDegree(f,'total')
d =
     6
>> d = mvPolynDegree(f,'tuple')
d =
     6     6     6
- mvPolynTimesMonom:
Compute a multivariate polynomial p times a monomial t
Syntax: >> f = mvPolynTimesMonom(p,t)
Example:
>> f
f =
     0     3     6     0     3     0     0     3     0     0
     0     0     0     3     3     6     0     0     3     0
     0     0     0     0     0     0     3     3     3     6
    -2    -1     1    -1     2     1    -1     2     2     1
>> t = [2,3,1,5]'
t =
     2
     3
     1
     5
>> mvPolynTimesMonom(f,t)
ans =
     2     5     8     2     5     2     2     5     2     2
     3     3     3     6     6     9     3     3     6     3
     1     1     1     1     1     1     4     4     4     7
   -10    -5     5    -5    10     5    -5    10    10     5
- mvPolynAdd:
Compute the addition f+g of two multivariate polynomials f and g, or the linear combination s*f + t*g if scalars s and t are also given
Syntax: >> d = mvPolynAdd(f,g)
        >> d = mvPolynAdd(f,g,s,t)
Input: f, g -- multivariate polynomials in coeff. matrices
       s, t -- (optional) complex numbers
Output: multivariate polynomial f+g, or s*f + t*g
Example:
>> f
f =
     0     2     4     0     2     0
     0     0     0     2     2     4
    -2    -1     1    -1     2     1
>> p
p =
     3     5     7     3     5     3
     1     1     1     3     3     5
     8    12     4    12     8     4
>> r = mvPolynAdd(f,p,1,-1)    % compute r = f - p
r =
     0     2     4     3     5     7     0     2     3     5     0     3
     0     0     0     1     1     1     2     2     3     3     4     5
    -2    -1     1    -8   -12    -4    -1     2   -12    -8     1    -4
- mvPolynMultiply:
Compute the product f*g of two multivariate polynomials f and g
Syntax: >> h = mvPolynMultiply(f,g)
Input: f, g -- multivariate polynomials in coeff. matrices
Output: multivariate polynomial f*g
Example:
>> f
f =
     0     2     4     0
     0     0     0     2
    -4    -2     2    -2
>> g
g =
     3     5     7
     1     1     1
     8    12     4
>> mvPolynMultiply(f,g)
ans =
     3     5     7     9    11     3     5     7
     1     1     1     1     1     3     3     3
   -32   -64   -24    16     8   -16   -24    -8
- mvPolynPower
Compute the power f^k of the multivariate polynomial f
Syntax: >> h = mvPolynPower(f,k)
Input: f -- multivariate polynomial in coeff. matrix
       k -- integer exponent of the power
Output: multivariate polynomial f^k
Example:
>> f
f =
     0     2     4     0
     0     0     0     2
    -4    -2     2    -2
>> mvPolynPower(f,3)
ans =
     0     2     4     6     8    10    12     0     2     4     6     8     0     2     4     0
     0     0     0     0     0     0     0     2     2     2     2     2     4     4     4     6
   -64   -96    48    88   -24   -24     8   -96   -96    72    48   -24   -48   -24    24    -8
- mvPolynEvaluate:
Evaluate a multivariate polynomial f at x
Syntax: >> y = mvPolynEvaluate(f,x)
Input: f -- multivariate polynomial (as a coeff. matrix)
       x -- vector value for the variables
Output: the value of f(x)
Example:
>> f
f =
     0     2     4     0
     0     0     0     2
    -4    -2     2    -2
>> mvPolynEvaluate(f,[1;1])
ans =
    -6
- mvPolynDerivative
Compute the partial derivative of the multivariate polynomial p with respect to the j-th variable
Syntax: >> q = mvPolynDerivative(p,j)
Input: p -- polynomial in coefficient matrix
       j -- index of the variable w.r.t. which the derivative is to be taken
Output: multivariate polynomial as the partial derivative
Example:
>> p = [2 1 0 0; 0 1 2 0; 1 1 -1 1]
p =
     2     1     0     0
     0     1     2     0
     1     1    -1     1
>> mvPolynDerivative(p,1)
ans =
     1     0
     0     1
     2     1
>> mvPolynDerivative(p,2)
ans =
     1     0
     0     1
     1    -2
- mvPolynClear
Clear tiny/zero coefficients of a multivariate polynomial f
Syntax:
>> g = mvPolynClear(f)            % clear zero coefficients of f
>> g = mvPolynClear(f,epsilon)    % clear coefficients whose magnitudes are less than or equal to epsilon
Input: f -- multivariate polynomial (as a coeff. matrix)
       epsilon -- magnitude threshold of coefficients to be cleared
Output: multivariate polynomial in coefficient matrix
Example:
>> h
h =
     3     5     7     9    11     3     5     7
     1     1     1     1     1     3     3     3
   -32     0   -24    16     0   -16     0    -8
>> mvPolynClear(h)
ans =
     3     7     9     3     7
     1     1     1     3     3
   -32   -24    16   -16    -8
>> g
g =
   3.0000e+000   5.0000e+000   7.0000e+000   3.0000e+000   5.0000e+000   3.0000e+000
   1.0000e+000   1.0000e+000   1.0000e+000   3.0000e+000   3.0000e+000   5.0000e+000
   8.0000e+000   9.5013e-013   4.0000e+000   1.2000e+001   8.0000e+000   2.3114e-013
>> mvPolynClear(g,1e-10)
ans =
     3     7     3     5
     1     1     3     3
     8     4    12     8
- HouseholderVector:
computing the Householder vector u such that
H = I - 2*u*u'
makes Hv = [*;0;...;0]
syntax >> u = HouseholderVector(v)
Example:
>> v = rand(5,1)
v =
0.8381
0.0196
0.6813
0.3795
0.8318
>> u = HouseholderVector(v)
u =
0.8922
0.0078
0.2698
0.1503
0.3294
>> (eye(5)-2*u*u')*v
ans =
-1.4152
-0.0000
-0.0000
-0.0000
-0.0000
- HouseholderTransform:
computing the Householder transformation w = H*v with
H = I - 2*u*u'
syntax >> w = HouseholderTransform(u,v)
Example:
>> v
v =
0.8381
0.0196
0.6813
0.3795
0.8318
>> u = HouseholderVector(v)
u =
0.8922
0.0078
0.2698
0.1503
0.3294
>> w = HouseholderTransform(u,v)
w =
-1.4152
-0.0000
-0.0000
-0.0000
-0.0000
- HouseholderTransformRight:
computing the Householder transformation from the right side, w = v*H with
H = I - 2*u*u'
syntax >> w = HouseholderTransformRight(u,v)
Example:
>> a = v'
a =
    0.8381    0.0196    0.6813    0.3795    0.8318
>> u = HouseholderVector(v);
>> w = HouseholderTransformRight(u,a)
w =
   -1.4152   -0.0000   -0.0000   -0.0000   -0.0000
- GivensMatrix:
Generating a Givens' (2x2) matrix T such that
T*v = [d; 0]
Input: v -- a vector of dimension 2
Output: T -- Givens' matrix
d -- the 2-norm of v
syntax >> [T,d] = GivensMatrix(v)
Example:
>> v
v =
0.7907
-1.5935
>> [T,d] = GivensMatrix(v)
T =
0.4445 -0.8958
0.8958 0.4445
d =
1.7789
>> T*v
ans =
1.7789
0.0000
- IntegerInverse:
generate a pair of nxn integer matrices that are inverses of each other
syntax >> [X,Y] = IntegerInverse(n)
       >> [X,Y] = IntegerInverse(n,m)    % entries in interval [-m,m]
This module is mainly used to generate test cases
Example:
>> [X,Y] = IntegerInverse(5)
X =
     1     0    -1     0     0
     0     1     0    -1    -1
    -1     0     2     0     0
     0    -1     0     2     1
     0    -1     0     1     2
Y =
     2     0     1     0     0
     0     3     0     1     1
     1     0     1     0     0
     0     1     0     1     0
     0     1     0     0     1
>> X*Y
ans =
     1     0     0     0     0
     0     1     0     0     0
     0     0     1     0     0
     0     0     0     1     0
     0     0     0     0     1
- MatrixWithGivenJordanForm
Construct a matrix with given Jordan Canonical Form
Syntax: There are several choices for either LHS or RHS
    >> LHS = RHS
    ------------------- | --------------------------------------
    A                   | MatrixWithGivenJordanForm(s,j)
    [A,X]               | MatrixWithGivenJordanForm(s,j,k)
    [A,X,Y]             |
Input: s -- eigenvalues in each Jordan block
       j -- size of each Jordan block
       k -- (optional) number of (simple) eigenvalues of a kxk random matrix
Output: A -- a matrix whose eigenvalues and Jordan structure are defined by the input s, j and k
        X,Y -- matrices such that Y*A*X is the Jordan Canonical Form of A
Example: To generate a matrix with
    eigenvalues | Jordan block sizes
         2      |     3, 2, 1
         3      |     2, 2
>> [A,X,Y] = MatrixWithGivenJordanForm([2 2 2 3 3],[3 2 1 2 2])
A =
    -2     8    -4    -3     1    -1    -1    -1     1     1
     4    -4     5     1     0     0     1     2     1    -2
     8   -15    10     6    -3     2     2     2    -2    -2
    -1     5    -1    -1     4    -2    -1    -1     2     0
     0     0     0     0     2     0     0     0     0     0
     0     2     1    -2     1     1     0     1     2    -1
     2    -6     4     1    -2     1     4     4     1    -2
     0     0     0     0     0     0     0     3     0     0
   -10    21   -11    -7     5    -3    -3    -5     4     4
    -5     8    -8     2    -2     1    -1    -4    -4     7
X =
     1     0    -1     0     0     1     0     0     1     0
     0     1     1     0     0     1     0     0    -1    -1
    -1     1     3    -1     0     0     0     0    -3     0
     0     0    -1     2     0     0    -1     0     2    -1
     0     0     0     0     1     1     0     0     0     0
     1     1     0     0     1     4     0     0     0    -1
     0     0     0    -1     0     0     2     0    -1     0
     0     0     0     0     0     0     0     1     0     1
     1    -1    -3     2     0     0    -1     0     5     1
     0    -1     0    -1     0    -1     0     1     1     5
Y =
     6    -3     3     3     1    -1     1     0    -1     0
    -3     7    -4    -2     1    -1    -1    -1     0     1
     3    -4     4     1     0     0     1     1     1    -1
     3    -2     1     4     0     0     1    -1    -2     1
     1     1     0     0     2    -1     0     0     0     0
    -1    -1     0     0    -1     1     0     0     0     0
     1    -1     1     1     0     0     1     0     0     0
     0    -1     1    -1     0     0     0     2     1    -1
    -1     0     1    -2     0     0     0     1     2    -1
     0     1    -1     1     0     0     0    -1    -1     1
The results can be verified:
>> Y*A*X
ans =
     2     1     0     0     0     0     0     0     0     0
     0     2     1     0     0     0     0     0     0     0
     0     0     2     0     0     0     0     0     0     0
     0     0     0     2     1     0     0     0     0     0
     0     0     0     0     2     0     0     0     0     0
     0     0     0     0     0     2     0     0     0     0
     0     0     0     0     0     0     3     1     0     0
     0     0     0     0     0     0     0     3     0     0
     0     0     0     0     0     0     0     0     3     1
     0     0     0     0     0     0     0     0     0     3
- ScaledLeastSquares:
Scaled least squares solver for A*x = b
Input: A -- matrix
       b -- right-side vector
       w -- (optional) weight vector
Output: (return) -- the least squares solution
syntax >> x = ScaledLeastSquares(A,b)
       >> x = ScaledLeastSquares(A,b,w)
- GetVariables (alias pvar)
Extract variable names from a polynomial string
Syntax: >> z = GetVariables(f)
    or: >> z = pvar(f)
Input: f --- (string) polynomial
Output: z --- (cell) variable names of f
Example: >> g = '-2*x + 9*x*y^2 - 9*x^2*y^2 + 8*x^3*y^2';
         >> z = GetVariables(g)
         z =
             'x'    'y'
- PolynomialCoefficient (alias pcoef)
Extract the coefficient of a monomial t in the polynomial p
Syntax: (pcoef is the shortened alias for PolynomialCoefficient)
>> c = PolynomialCoefficient(p,t)
Input: p --- (string) polynomial
       t --- (string) monomial
Output: c --- (numeric or string) the coefficient of t in p
Example: >> p = '-2*x + 9*x*y^2 - 9*x^2*y^2 + 8*x^3*y^2';
         >> c = PolynomialCoefficient(p,'y^2')
         c =
             9*x - 9*x^2 + 8*x^3
         >> d = PolynomialCoefficient(p,'x*y^2')
         d =
             -9
- PolynomialDerivative (alias pder)
Calculate a (partial) derivative of a polynomial f
Syntax: (pder is the shortened alias for PolynomialDerivative)
>> p = PolynomialDerivative(f)           % if f is univariate
>> p = PolynomialDerivative(f,'x',k)     % k-th derivative of f(x)
>> p = PolynomialDerivative(f,z)         % partial derivative of f w.r.t. all variables in z
>> p = PolynomialDerivative(f,z,k)
Input: f --- (string) polynomial
       z --- (string or cell) variable name(s)
       k --- (integer or vector, optional) derivative orders, assumed to be ones if not provided
Output: p --- (string) (partial) derivative of f
Example: >> f = '3*x^5+2*x+8';
         >> PolynomialDerivative(f)
         ans =
             2 + 15*x^4
         >> g = '-2*x + 9*x*y^2 - 9*x^2*y^2 + 8*x^3*y^2';
         >> PolynomialDerivative(g,{'x','y'},[2,1])
         ans =
             -36*y + 96*x*y
- PolynomialEvaluate (alias peval)
Evaluate a polynomial at given values of (some or all) variables. The result can be a number or a polynomial in the remaining variables.
Syntax: (peval is the shortened alias for PolynomialEvaluate)
>> s = PolynomialEvaluate(f, var, z)
Input: f --- (string) polynomial
       var --- (cell) variable names
       z --- (vector) values of the variables in var
Output: s --- (numeric or string) value of the polynomial
Example: >> g = '-2*x + 9*x*y^2 - 9*z^2*y^2 + 8*x^3*y^2*z^4';
         >> PolynomialEvaluate(g,{'x','z'},[2,1])
         ans =
             -4 + 73*y^2
- PolynomialFactor (alias pfac)
Numerical factorization of a polynomial, univariate or multivariate
Syntax: (pfac is the shortened alias for PolynomialFactor)
>> s = PolynomialFactor(f)
>> s = PolynomialFactor(f, tol)
>> s = PolynomialFactor(f, tol, 'row')
Input: f --- (string) polynomial
       tol --- (numeric) coefficient error tolerance
       style --- (character) 'row' or anything else
Output: p --- (cell array or string) the default output p is an mx2 cell where p{k,1} is the k-th factor with multiplicity p{k,2}; if the input item style = 'row', the output p is a string in a row showing the factorization
        res --- (numeric) residual, namely backward error
        fcnd --- (numeric) condition number
Example 1: Univariate factorization of (x-1)^20 (x-2)^15 (x-3)^10 (x-4)^5
>> p = ptimes(ppower('x-1',20),ppower('x-2',15),ppower('x-3',10),ppower('x-4',5))
>> pfac(p)
ans =
    'x-4'    [ 5]
    'x-3'    [10]
    'x-2'    [15]
    'x-1'    [20]
Example 2: A multivariate factorization
>> p = '1125 + 1500*x*y*z + 500*x^2*y^2*z^2 + 675*x*y^3*z^2 + 900*x^2*y^4*z^3 + 300*x^3*y^5*z^4 + 135*x^2*y^6*z^4 + 180*x^3*y^7*z^5 + 60*x^4*y^8*z^6 + 9*x^3*y^9*z^6 + 12*x^4*y^10*z^7 + 4*x^5*y^11*z^8'
>> G = pfac(p,1e-10,'row')
G =
    (1125) * (1 + 0.666666666666667*x*y*z)^2 * (1 + 0.2*x*y^3*z^2)^3
PolynomialGCD (alias pgcd)
Compute the numerical greatest common divisor of two polynomials
Syntax: (pgcd is the shortened alias of PolynomialGCD)
>> [u, v, w, residual, condition] = PolynomialGCD(f, g)
>> [u, v, w, residual, condition] = PolynomialGCD(f, g, tol)
>> [u, v, w, residual, condition, p, q] = PolynomialGCD(f, g)
>> [u, v, w, residual, condition, p, q] = PolynomialGCD(f, g, tol)
Input: f --- (string) polynomial one
       g --- (string) polynomial two
       tol --- (vector, optional) error tolerance in coefficients with default 1.0e-10
Output: u --- (string) GCD of f and g
        v --- (string) cofactor of f such that f = u*v
        w --- (string) cofactor of g such that g = u*w
        residual --- (numeric) backward error ||(f,g)-(u*v,u*w)||
        condition --- (numeric) sensitivity measure of the numerical GCD
        p,q --- (strings) polynomials for p*f+q*g=u in the univariate case
Example:
>> f = '-45*x*y - 15*x^3*y - 20*x*y^2 + 27*x*y^3 + 9*x^3*y^3 + 12*x*y^4';
>> g = '45*x^2*y^2 + 15*x^2*y^3 - 27*x^2*y^4 - 9*x^2*y^5';
>> [u,v,w,res,cond] = PolynomialGCD(f,g);
>> {u; v; w; res; cond}
ans =
'-70*x*y + 42*x*y^3'
'0.642857142857143 + 0.214285714285714*x^2 + 0.285714285714286*y'
'-0.642857142857143*x*y - 0.214285714285714*x*y^2'
[3.552713678800501e-014]
[ 1.089132505368521]
PolynomialPlus (alias pplus)
Polynomial addition h = f1 + f2 + ... + fk
Syntax: (pplus is the shortened alias for PolynomialPlus)
>> p = PolynomialPlus(f,g)
>> p = PolynomialPlus(f,g,h)
>> p = PolynomialPlus(f1,f2,f3,f4)    % etc
Input: f, g, h, ... --- (string or numeric) polynomials
Output: p --- (string) sum of the input polynomials
Example:
>> PolynomialPlus('3+x^2','5*y^2+6*z^3','(5+3i)*x*z')
ans =
3 + x^2 + 5*y^2 + (5+3i)*x*z + 6*z^3
PolynomialMinus (alias pminus)
Polynomial subtraction h = f - g
Syntax: >> h = PolynomialMinus(f,g)
    or  >> h = pminus(f,g)
Input: f --- (string or numeric) polynomial one
       g --- (string or numeric) polynomial two
Output: h --- (string) difference f - g
Example: >> PolynomialMinus('5*x^2*y+8','3*x^2*y+2')
ans =
    6 + 2*x^2*y
PolynomialSupport (alias psup)
Extract the support of input polynomial(s)
Syntax: (psup is the shortened alias for PolynomialSupport)
>> F = PolynomialSupport(f)
>> F = PolynomialSupport(f,g)
>> F = PolynomialSupport(f,g,h)
Input: f, g, h --- (strings) any number of polynomial strings
Output: F --- (cell) monomial support of the input polynomials
Example: >> f = '3 + 2*x*y + 5*x^2*y^3';
         >> g = '5+3*x*y-6*x^3*y^2';
         >> S = PolynomialSupport(f,g)
         S =
             '1'    'x*y'    'x^3*y^2'    'x^2*y^3'
PolynomialTimes (alias ptimes)
Polynomial multiplication h = f1 * f2 * ... * fk
Syntax: (ptimes is the shortened alias for PolynomialTimes)
>> p = PolynomialTimes(f,g)
>> p = PolynomialTimes(f,g,h)
>> p = PolynomialTimes(f1,f2,f3,f4)    % etc
Input: f, g, h, ... --- (string or numeric) polynomials
Output: p --- (string) product of the input polynomials
Example:
>> PolynomialTimes('2+x','3-y^2','4+x*z')
ans =
24 - 8*y^2 + 12*x - 4*y^2*x + 6*x*z - 2*y^2*x*z + 3*x^2*z - y^2*x^2*z
PolynomialPower (alias ppower)
Polynomial power h = f^k
Syntax: >> h = PolynomialPower(f,k)
    or  >> h = ppower(f,k)
Input: f --- (string) polynomial
       k --- (integer) exponent
Output: h --- (string) polynomial f^k if k is nonnegative
Example: >> PolynomialPower('x*y+1',3)
ans =
1 + 3*x*y + 3*x^2*y^2 + x^3*y^3
PolynomialNorm (alias pnorm)
Polynomial norm ||f||_k
Syntax: (pnorm is the shortened alias of PolynomialNorm)
>> s = PolynomialNorm(f)
>> s = PolynomialNorm(f,k)
Input: f --- (string or numeric) polynomial
       k --- 1, 2, or Inf
Output: s --- (numeric) k-norm of f
PolynomialClear (alias pclear)
Clear tiny coefficients, tiny real parts, and tiny imaginary parts from a polynomial
Syntax: (pclear is the shortened alias of PolynomialClear)
>> g = PolynomialClear(f)
>> g = PolynomialClear(f,tol)
Input: f --- (string or numeric) polynomial
       tol --- (numeric) threshold for being tiny
Output: g --- (string or numeric) cleared polynomial
Example:
>> PolynomialClear('(3+2e-13*i) + 2.1e-15*x*y+(1e-14+2*i)*x^3',1e-10)
ans =
3 + (0+2i)*x^3
PolynomialJacobian (alias pjac)
Calculate the Jacobian of a polynomial system
Syntax: (pjac is the shortened alias for PolynomialJacobian)
>> J = PolynomialJacobian(F,z)
Input: F --- (cell) polynomial system as a cell array of strings
       z --- (string or cell) variable name(s)
Output: J --- (cell) Jacobian of F
Example: >> F = {'x + y^2-3','x^2+z^4*y+5','x*z-6'}
         >> J = PolynomialJacobian(F,{'x','y','z'})
         J =
             '1'      '2*y'    '0'
             '2*x'    'z^4'    '4*z^3*y'
             'z'      '0'      'x'
- psolve
Numerical solution of polynomial systems by the HOM4PS homotopy method
Syntax: >> [S, var] = psolve( P )
Input: P --- (cell array) the polynomial system to be solved
For example:
>> P = {'-x^5+y^5-3*y-1','5*y^4-3','-20*x+y-z'}
Output: S --- (matrix) numerical solutions (as columns of S)
var --- (cell array) the array of variables
For example, output
S =
-0.8264 + 0.6004i -0.6092 - 1.0165i
-0.8801 + 0.0000i -0.0000 - 0.8801i
15.6482 -12.0086i 12.1831 +19.4496i
var = {'x','y','z'}
means there are two numerical solutions
(x,y,z) =
(-0.8264+0.6004i, -0.8801+0.0000i, 15.6482-12.0086i)
(-0.6092-1.0165i, -0.8801+0.0000i, 12.1831+19.4496i)
TotalDegree
Extract the total degree from a polynomial string
Syntax: >> d = TotalDegree(f)
>> d = TotalDegree(f,var)
Input: f --- (string) polynomial
var --- (cell) optional, variable names
Output: d --- (integer) total degree of f
Example: >> p = '-2*x^2 + 9*x^3*y^2
- 9*x^4*y^2 + 8*x^5*y^2';
>> TotalDegree(p)
ans =
7
FactorDistance
Calculate the distance between two factorizations
Given two factorizations stored in 2-column cells F and G, where the 1st column contains the polynomial factors and the 2nd column entries are the corresponding multiplicities, FactorDistance calculates their distance, which is independent of scaling, choice of representative and order of factors.
Input: F --- (m x 2 cell array)
       F{k,1}: the k-th factor of F as a string
       F{k,2}: the multiplicity of the k-th factor
       G --- (n x 2 cell array)
       G{k,1}: the k-th factor of G as a string
       G{k,2}: the multiplicity of the k-th factor
       permopt --- (string, optional) 'all': use all permutations
                   otherwise: the current order only
Example:
f =
    '-4888'                       [1]
    'x*y-2*x+3*y-1'               [3]
    '3*y^2-5*x^3+4'               [6]
g =
    '16'                          [1]
    '352 + 264*y^2 - 440*x^3'     [6]
    '-6 - 12*x + 18*y + 6*x*y'    [3]
>> FactorDistance(f,g)
ans =
5.2663e-016
TupleDegree
Extract the tuple degree from a polynomial string
Syntax: >> d = TupleDegree(f,var)
Input: f --- (string) polynomial
var --- (cell) variable names in order
Output: d --- (vector) tuple degree of f w.r.t. var
Example: >> p = '-2*x^2 + 9*x^3*y^2
- 9*x^4*y^2 + 8*x^5*y^2';
>> TupleDegree(p,{'x','y'})
ans =
5
3
RandomPolynomial (alias prand)
Generate a random polynomial
Syntax: >> p = RandomPolynomial(variables,degree,terms)
    or  >> p = prand(variables,degree,terms)
Input: variables --- (cell or string) variables
       degree --- (integer or vector) tuple degree bound for the random polynomial
       terms --- (integer) number of terms
Output: p --- (string) random polynomial
Example: >> p = RandomPolynomial({'x','y'},[5,2],4)
p =
-2*x^2 + 9*x^3*y^2 - 9*x^4*y^2 + 8*x^5*y^2
Multiplicity
Multiplicity --> Computing the multiplicity and multiplicity structure of a system of nonlinear equations at an isolated zero.
<Synopsis>
m = Multiplicity(f, variables, zero)
m = Multiplicity(f, variables, zero, options)
[m, D, H] = Multiplicity(f, variables, zero)
[m, D, H] = Multiplicity(f, variables, zero,
options)
<Input Parameters>
1. f --> a cell array containing the system of nonlinear equations as strings, e.g.,
       >> f = { 'x^2 + sin(y) -1', 'x-cos(y)' };
2. variables --> a cell array containing the unknown variables as strings, e.g.,
       >> variables = { 'x', 'y' };
3. zero --> a vector containing a numerical zero (root) of f, e.g.,
       >> zero = [1, 0];
4. options
--> an optional parameter which includes the configuration
settings:
Display: Set to 2 to have all output (the dual space
and Hilbert function) printed to the screen,
and set to 1 to have depth and Hilbert
function printed to the screen. Otherwise
set the default value 0;
Tol: The threshold for numerical rank-revealing.
Singular values above Tol are counted as
nonzero. The default value is 1e-8;
EqnType: The equation type for Multiplicity,
polynomial system ('Poly') or nonlinear
functions ('Nonlinear'). The default value is
'Nonlinear'. By setting the value to 'Poly',
Multiplicity will transfer the polynomial
system to the matrix representation
internally and speed up the computation.
MaxInt: Maximum multiplicity allowed in the recursive
computation. If a zero is not isolated, the
multiplicity will be infinity. The code can
be used for identifying a nonisolated zero by
setting MaxInt to a known upper bound (e.g.
the Bezout number). The default value is 1000.
All the configuration settings may be changed by using
optset function, i.e.,
>> options = optset('para', value);
para could be any configuration setting, value is set
to para. See OPTSET for details. Any configuration
setting that is not changed will be set to its default
value.
<Output Parameters>
1. m --> the multiplicity of f at the zero;
2. D --> a basis for the dual space as a cell array with each component being a matrix in the Matlab form of
       D{i} = [c_1, j_1;
               c_2, j_2;
               ...
               c_n, j_n ];
   representing a differential functional
       D_i = c_1*d_{j_1} + ... + c_n*d_{j_n}
   where the d_{j_i}'s are differential monomial functionals. (E.g., for a system of equations with variables {x,y,z} at the zero x=a, y=b, z=c, the functional d_{[i,j,k]} applied to any function g is the value of the scaled partial derivative
       d_{[i,j,k]}(g) = ( 1/(i!*j!*k!) ) * ( d^(i+j+k) g / (dx^i dy^j dz^k) )  evaluated at (a,b,c).
   The dual space is the vector space consisting of such differential functionals that vanish on the nonlinear system while satisfying the so-called closedness condition.)
3. H --> values of the Hilbert function in a vector.
<Examples>
Consider the nonlinear system
       sin(x)*cos(y) - x     = 0
       sin(y)*sin(x)^2 - y^2 = 0
at the zero (x, y) = (0, 0). The multiplicity can be computed by the following statements:
       >> f = {'sin(x)*cos(y) - x', 'sin(y)*sin(x)^2 - y^2'};
       >> variables = {'x', 'y'};
       >> zero = [0, 0];
       >> m = Multiplicity(f, variables, zero)
To create an options structure with Tol = 1e-10, MaxInt = 100:
       >> options = optset('Tol', 1e-10, 'MaxInt', 100);
       >> m = Multiplicity(f, variables, zero, options)
<Algorithm>
This code implements a modified closedness subspace method for
multiplicity identification with a newly developed
equation-by-equation strategy for improving efficiency.
<References>
[1] An algorithm and software for computing multiplicity structures
    at zeros of nonlinear systems, W. Hao, A. J. Sommese and Z. Zeng
[2] The closedness subspace method for computing the multiplicity
    structure of a polynomial system, Z. Zeng.
See also optset, cell
ShowDual
Display a basis for dual space D computed by Multiplicity
Syntax: ShowDual(D)
Input: D --- (cell array) the basis of dual space computed by the
function Multiplicity
Output: Screen display of the dual basis
Example: >> [m,D,H] = Multiplicity({'x^2+sin(y^2)-2*x+1', 'x-cos(y)'}, ...
{'x','y'}, [1,0] )
>> ShowDual(D)
d00
d01
0.447214*d10 -0.894427*d02
-0.816497*d03 +0.408248*d11
MacaulayMatrix
Construct the Macaulay matrix of a polynomial ideal at an
isolated zero
Syntax: >> M = MacaulayMatrix(z, P, var, depth);
Input: z --- (1xn vector) the isolated zero
P --- (1xm cell) the polynomials generating the ideal
var --- (1xn cell) variable names of the ideal
depth --- (integer) depth (differential order) of the Macaulay matrix
Output: M --- the Macaulay matrix in sparse format
Example: For an approximate zero x = 0.57735, y = 0.57735 to the system
x^3+y-0.7698 = 0, x+y^3-0.7698 = 0
the call sequence
>> z = [0.57735,0.57735]; P = {'x^3+y-0.7698','x+y^3-0.7698'};
>> var = {'x','y'}; depth = 2;
>> M = MacaulayMatrix(z, P, var, depth)
generates the Macaulay matrix of depth 2.
ConvolutionMatrix
Generate an n-column convolution matrix of a polynomial p.
An n-column convolution matrix of p is the matrix for the
linear transformation L(f) = p*f for polynomial f of degree n-1.
If f is the coefficient vector of f, and C is the convolution matrix,
then C*f is the coefficient vector of p*f.
Syntax: C = ConvolutionMatrix(p,n)
Input: p --- (string or vector) the univariate polynomial
n --- (integer) the column dimension of the convolution matrix
Output: C --- the convolution matrix
Example: >> p = '1+3*x^4-6*x^8';
>> C = ConvolutionMatrix(p,3)
C =
    -6     0     0
     0    -6     0
     0     0    -6
     0     0     0
     3     0     0
     0     3     0
     0     0     3
     0     0     0
     1     0     0
     0     1     0
     0     0     1
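As a quick sanity check (a hedged sketch, assuming the descending-degree
coefficient convention visible in the columns of C above), multiplying C by a
coefficient vector should reproduce the coefficients of the product p*f:
>> f = [2; 0; 1];                          % coefficient vector of 2*x^2 + 1
>> C*f                                     % coefficient vector of p*(2*x^2 + 1)
>> conv([-6 0 0 0 3 0 0 0 1], [2 0 1])'    % the same product via Matlab's conv, for comparison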
SylvesterMatrix
Generate the k-th Sylvester matrix of f and g
The k-th Sylvester matrix is the matrix of the linear transformation
    L(v,w) = f*w + g*v
for deg(v) = deg(f)-k, deg(w) = deg(g)-k.
The default value for k is k=1 if not provided, which gives the most
well-known Sylvester matrix.
Syntax: >> S = SylvesterMatrix(f,g)
>> S = SylvesterMatrix(f,g,k)
Input: f, g --- (strings or coefficient vectors) polynomials
k --- (integer, optional) see above
Output: S --- (matrix) The Sylvester matrix
Example: >> SylvesterMatrix('1+3*x-6*x^3','2+5*x^2')
ans =
    -6     0     5     0     0
     0    -6     0     5     0
     3     0     2     0     5
     1     3     0     2     0
     0     1     0     0     2
>> SylvesterMatrix('1+3*x-6*x^3','2+5*x^2',2)
ans =
    -6     5     0
     0     0     5
     3     2     0
     1     0     2
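A standard property worth noting (a hedged aside, not specific to NAClab): for
the k=1 Sylvester matrix, the nullity equals the degree of the GCD of f and g,
so a quick coprimality check is
>> S = SylvesterMatrix('1+3*x-6*x^3','2+5*x^2');
>> gcd_degree = size(S,2) - rank(S)    % expected to be 0 here, since the pair is coprime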
BezoutMatrix
Generate the Bezout matrix B_n(f,g) of polynomials f and g
that is defined so that
    f(x)*g(y) - f(y)*g(x)
    --------------------- = [1,x,...,x^(n-1)]*B_n(f,g)*[1,y,...,y^(n-1)]'
           x - y
Syntax: >> B = BezoutMatrix(f,g)
>> B = BezoutMatrix(f,g,n)
Input: f, g --- (strings or coefficient vectors) polynomials
n --- (integer, optional) size of the Bezout matrix,
      cannot be smaller than the degree of f or g;
      the default value is the larger of the degrees of f and g
Output: B --- (matrix) The Bezout matrix
Example: >> BezoutMatrix('3*x^3-x','5*x^2+1')
ans =
    -1     0     3
     0     8     0
     3     0    15
>> BezoutMatrix('3*x^3-x','5*x^2+1',4)
ans =
    -1     0     3     0
     0     8     0     0
     3     0    15     0
     0     0     0     0
CompanionMatrix
Generate the companion matrix of the polynomial f
Syntax: >> C = CompanionMatrix(f)
Input: f --- (string or coefficient vector) polynomial
Output: C --- (matrix) The companion matrix
Example: >> CompanionMatrix('x^4 + 2*x^3 + 3*x^2 + 4*x+5')
ans =
    -2    -3    -4    -5
     1     0     0     0
     0     1     0     0
     0     0     1     0
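Since the eigenvalues of this companion matrix are the roots of the polynomial,
one quick use (a hedged illustration using standard Matlab functions, not a
documented NAClab feature) is
>> eig(CompanionMatrix('x^4 + 2*x^3 + 3*x^2 + 4*x+5'))   % roots of the polynomial
>> roots([1 2 3 4 5])                                    % comparison with Matlab's roots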
LinearTransformMatrix
Generic routine for generating the linear transformation matrix
Syntax: M = LinearTransformMatrix(@F, DR, para)
[M,Range] = LinearTransformMatrix(@F, DR, para)
Input: F -- Matlab function name for calculating the linear transformation
            with syntax F(P,Q1,...,Qk) where P is the polynomial
            to be transformed by the linear transformation.
            The remaining parameters Q1,...,Qk must be
            provided as cell entries for para.
            (If the basis for the domain consists only of monomials,
            F needs to be implemented on monomials only.)
DR -- (cell) domain and range information
      default: DR = {D, dR} where D is a matrix representing the
      monomial basis. Each column of D represents a monomial. dR is
      the tuple degree bound for the range as a vector space
      consisting of all the polynomials with tuple degree within dR.
para -- (cell) parameters needed for running F
Output: M -- the (sparse) matrix for the linear transformation
Range -- (optional) the fewnomial basis for the range, if it is needed
Example: Let T be the linear transformation g -> f*g
in the domain {x*y^3*z^2, x^2*z^4, y^5*z}
where f = x^2*z^3 + 5*x*y^4*z^6
1. Write a code:
function G = MonomialMultiplyByF(M,F)
%
% function for calculating a monomial multiplied
% by a polynomial
%
% input: M -- a monomial in a one-column coefficient matrix
%        F -- a polynomial in a coefficient matrix
%
m = size(M,1)-1;
G = F;
for k = 1:m
    G(k,:) = F(k,:) + M(k);
end
2. Prepare input
>> D = [[1; 3; 2], [2; 0; 4], [0; 5; 1]];
>> dR = [4; 9; 10];
>> F = [[1; 0; 3; 1], [1; 4; 6; 5]];
3a. Execute with one output item requested:
>> M = LinearTransformMatrix(@MonomialMultiplyByF,{D,dR},{F});
M =
   (268,1)    1
   (438,1)    5
   (354,2)    1
   (524,2)    5
   (227,3)    1
   (397,3)    5
3b. Execute with two output items requested:
>> [M, R] = LinearTransformMatrix(@MonomialMultiplyByF,{D,dR},{F})
M =
   (2,1)    1
   (5,1)    5
   (3,2)    1
   (6,2)    5
   (1,3)    1
   (4,3)    5
R =
     2     3     4     2     3     4
     5     3     0     9     7     4
     4     5     7     7     8    10
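As a hedged usage note (an inference from the description of M and Range above,
not a documented workflow): a polynomial in the domain is identified with its
coordinate vector relative to the columns of D, and M maps that vector to the
coordinates of its image. For example, with the 3b output,
>> c = [1; 0; 0];      % coordinates of the first domain monomial x*y^3*z^2
>> M*c                 % coordinates of its image relative to the columns of R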
GaussNewton
Generic Gauss-Newton iteration routine for the least squares solution of
    F(z) = 0
for z in a vector space domain
There are two versions:
(i) generic version: the solution consists of components of vectors,
    matrices and polynomials.
(ii) column vector version: iterates on column vectors
Syntax of version (i), i.e. generic version:
[z,res,fcond] = GaussNewton({@F,domain,para}, @G, z0)
[z,res,fcond] = GaussNewton({@F,domain,para}, @G, z0, trac)
[z,res,fcond] = GaussNewton({@F,domain,para}, @G, z0, trac, tol)
Input: F -- (function) Matlab function name for calculating y = F(z)
            with syntax >> y = F(z1,...,zk,p1,...,pm) where z1,...,zk
            are components of z and p1,...,pm are fixed parameters.
            (F must run on F(domain{:},para{:}).)
domain -- domain of the homomorphism represented by
          (i) an mxn matrix of 0's and 1's representing
              0 entries and variable entries respectively, or
          (ii) a polynomial with variable coefficients being
               nonzero, or
          (iii) a cell array of matrices in (i) and/or (ii)
          assuming the domain is a product of matrix spaces and
          polynomial spaces
para -- (cell) parameters needed for running F
G -- (function) Matlab function name for calculating the
     Jacobian u = F_z(z0)(y) of F as a homomorphism with syntax
     >> u = G(y1,...,yk,z1,...,zk,p1,...,pm)
     where z1,...,zk,p1,...,pm are the same as for F but
     considered fixed, and the variables are y1,...,yk in the
     same domain as F.
z0 -- (cell/matrix/polynomial) the initial iterate for the
      Gauss-Newton iteration in the domain
trac -- (0, 1, or 2, optional) flag for tracking the iteration
        if trac = 0 or missing, no tracking
        if trac = 1, track the first component z1 of z
        if trac = 2, track all components of z
tol -- (numeric, optional) error tolerance of the iterate
       if missing, the iteration stops when the residual stops
       decreasing
Output: z -- the least squares solution in the domain
res -- (optional) the residual ||F(z)||
fcond -- (optional) the condition number
Syntax of version (ii), i.e. column vector version:
[z,res,fcond] = GaussNewton(@Func, @Jacb, z0, {p1,...,pn},trac)
Input: Func -- Matlab function name for F(z)
Jacb -- Matlab function name for the Jacobian J(z) of F(z)
both Func and Jacb must be written to accept input
z0 along with other parameters p1, p2, ..., pn
z0 -- the initial iterate vector
{p1,p2,...,pn} -- (cell) vector parameters for Func and Jacb
trac -- tracking flag, 0, or 1
Output: z -- the least squares solution vector
res -- (optional) the residual ||F(z)||
fcond -- (optional) the condition number
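A minimal sketch of the column vector version (hedged: the exponential model,
the synthetic data, and the use of anonymous function handles in place of
named functions are illustrative assumptions, not taken from this documentation):
>> t = (0:0.5:3)';  y = 2.0*exp(-0.7*t) + 0.01*randn(size(t));   % synthetic data
>> Func = @(z,t,y) z(1)*exp(z(2)*t) - y;                         % residual F(z)
>> Jacb = @(z,t,y) [exp(z(2)*t), z(1)*t.*exp(z(2)*t)];           % Jacobian J(z)
>> z0 = [1; -0.5];                                               % initial iterate
>> [z, res, fcond] = GaussNewton(Func, Jacb, z0, {t, y}, 0)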
Newton
Generic rank-r Newton's iteration routine solving F(z) = 0 where
F is a smooth mapping between vector spaces, assuming the Jacobian
is of rank r at the solution, which can be non-isolated
(c.f. http://homepages.neiu.edu/~zzeng/Papers/Rank-r_Newton.pdf)
Syntax
[z,res,fcond] = Newton({@F,domain,para}, @G, z0)
[z,res,fcond] = Newton({@F,domain,para}, @G, z0, trac)
[z,res,fcond] = Newton({@F,domain,para}, @G, z0, trac, tol)
[z,res,fcond] = Newton({@F,domain,para}, {@G,r}, z0)
[z,res,fcond] = Newton({@F,domain,para}, {@G,r}, z0, trac)
[z,res,fcond] = Newton({@F,domain,para}, {@G,r}, z0, trac, tol)
Input: F -- (function) Matlab function name for calculating y = F(z)
            with syntax >> y = F(z1,...,zk,p1,...,pm) where z1,...,zk
            are components of z and p1,...,pm are fixed parameters.
            (F must run on F(domain{:},para{:}).)
domain -- domain of the homomorphism represented by
          (i) an mxn matrix of 0's and 1's representing
              0 entries and variable entries respectively, or
          (ii) a polynomial with variable coefficients being
               nonzero, or
          (iii) a cell array of matrices in (i) and/or (ii)
          assuming the domain is a product of matrix spaces and
          polynomial spaces
para -- (cell) parameters needed for running F
G -- (function) Matlab function name for calculating the
     Jacobian u = F_z(z0)(y) of F as a homomorphism with syntax
     >> u = G(y1,...,yk,z1,...,zk,p1,...,pm)
     where z1,...,zk,p1,...,pm are the same as for F but
     considered fixed, and the variables are y1,...,yk in the
     same domain as F.
r -- (integer, optional) if provided, the rank-r projection
     of the Jacobian is used for the iteration
z0 -- (cell/matrix/polynomial) the initial iterate for the
      Newton's iteration in the domain
trac -- (0, 1, or 2, optional) flag for tracking the iteration
        if trac = 0 or missing, no tracking
        if trac = 1, track the first component z1 of z
        if trac = 2, track all components of z
tol -- (numeric, optional) error tolerance of the iterate
       if missing, the iteration stops when the residual stops
       decreasing
Output: z -- the least squares solution in the domain
res -- (optional) the residual ||F(z)||
fcond -- (optional) the condition number
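A speculative minimal sketch of the generic calling pattern (the function, the
all-ones domain matrix, the empty parameter cell and the rank value below are
illustrative assumptions; consult the paper linked above for the actual
conventions): solving x^2 + y^2 - 1 = 0, whose solution set is a curve and
whose Jacobian has rank 1 there.
>> F = @(z) z(1)^2 + z(2)^2 - 1;           % F maps a 2x1 matrix variable to a scalar
>> G = @(y,z) 2*z(1)*y(1) + 2*z(2)*y(2);   % Jacobian F_z(z) applied to y
>> domain = [1; 1];                        % a 2x1 matrix of 1's: both entries are variables
>> z0 = [0.9; 0.5];                        % initial iterate near the solution curve
>> [z, res, fcond] = Newton({F, domain, {}}, {G, 1}, z0)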
PolynString2CoefMat
Convert a polynomial string to a coefficient matrix
Syntax: >> F = PolynString2CoefMat(f,var)
Input: f --- (string) polynomial string
var --- (cell) variable names in order
Output: F --- (matrix) polynomial in coefficient matrix
Example: >> p = '6*x^2*y + 8*y^3';
>> PolynString2CoefMat(p,{'x','y'})
ans =
2 0
1 3
6 8
CoefMat2PolynString
Convert a polynomial coefficient matrix to a polynomial string
Syntax: >> p = CoefMat2PolynString(F,var)
>> p = CoefMat2PolynString(F,var,digits)
Input: F --- (matrix) Coefficient matrix of a polynomial
var --- (cell) variable names in order
digits --- (integer) optional, number of digits, default = 15
Output: p --- (string) polynomial as a string
Example: >> F = [2 0; 1 3; 6, 8]
F =
2 0
1 3
6 8
>> CoefMat2PolynString(F,{'x','y'})
ans =
6*x^2*y + 8*y^3
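The two conversions are inverse to each other, so composing the two documented
examples reproduces the input string (a hedged round-trip check):
>> p = '6*x^2*y + 8*y^3';
>> CoefMat2PolynString(PolynString2CoefMat(p,{'x','y'}), {'x','y'})
ans =
6*x^2*y + 8*y^3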
uvGCDfixedDegree
uvGCDfixedDegree computes the numerical greatest common divisor of a
given polynomial pair (f,g) with a given GCD degree
Syntax:
>> [u,v,w,res,cond] = uvGCDfixedDegree(f,g,k)
Input: f, g -- the polynomial pair as coefficient vectors
tol -- (optional) the residual tolerance, default: 1e-10
k -- the GCD degree
Output: u,v,w -- the numerical GCD triplet such that
conv(u,v) approximates f, conv(u,w) approximates g
res -- the (weighted) residual
cond -- the numerical GCD condition number
Example:
>> f = [1 3 6 10 10 9 7 4];
>> g = [1 6 10 13 15 15 10 6 3 1];
>> [u,v,w,res,cond] = uvGCDfixedDegree(f,g,4)
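Since conv(u,v) approximates f and conv(u,w) approximates g, the quality of the
computed triplet can be checked directly (a hedged follow-up to the call above,
assuming u, v, w come back with the same row orientation as f and g):
>> norm(conv(u,v) - f)      % residual of the first factorization; should be tiny
>> norm(conv(u,w) - g)      % residual of the second factorization; should be tiny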
mvPolynCoefMat2Vec
Convert a coefficient matrix to a (sparse) coefficient vector
Syntax:
>> v = mvPolynCoefMat2Vec(F)
>> v = mvPolynCoefMat2Vec(F,d)
Input: F -- multivariate polynomial in coeff. matrix
d -- (optional) degree bound for the polynomial vector space
     if not provided, the tuple degree of F is used
     (to be added: the case when d is a total degree)
Output: v -- (sparse vector) the coefficient vector
Example:
>> F = PolynString2CoefMat('3 + 5.5*x^2 - 7*x*y^3',{'x','y'})
F =
         0    2.0000    1.0000
         0         0    3.0000
    3.0000    5.5000   -7.0000
>> f = mvPolynCoefMat2Vec(F)
f =
   (1,1)      3.0000
   (3,1)      5.5000
   (11,1)    -7.0000
mvPolynCoefVec2Mat
Convert a multivariate coefficient vector to a coefficient matrix
Syntax:
>> F = mvPolynCoefVec2Mat(f,d)
Input: f -- coefficient vector of a multivariate polynomial
d -- tuple degree of the multivariate polynomial
Output: F -- the coefficient matrix of the multivariate polynomial
Example:
>> f
f =
   (1,1)      3.0000
   (3,1)      5.5000
   (11,1)    -7.0000
>> mvPolynCoefVec2Mat(f,[2,3])
ans =
         0    2.0000    1.0000
         0         0    3.0000
    3.0000    5.5000   -7.0000
mvPolynIndexer
Function for generating the monomial indexer s = [s(1),...,s(m)].
Given an m-tuple degree bound d = [d(1),...,d(m)], a monomial of
degree p = [p(1); ...; p(m)] is the k-th monomial in lexicographical
order, where
    k = 1 + s*p
Syntax:
>> s = mvPolynIndexer(d)
Input: d --- (vector) an m-tuple degree bound
Output: s --- (row vector) the indexer
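For instance (a hedged sketch; the expected value of k is inferred from the
mvPolynCoefMat2Vec example above, where the term x*y^3 occupies position 11 in
the coefficient vector for the degree bound d = [2,3]):
>> s = mvPolynIndexer([2,3]);
>> p = [1; 3];          % exponents of the monomial x*y^3
>> k = 1 + s*p          % expected to be 11, matching the sparse index in that example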
mvPolynMonomTotal
Calculate the monomial count within a given degree and number of variables
Syntax:
>> n = mvPolynMonomTotal(d)    % if d is a tuple degree
>> n = mvPolynMonomTotal(d,m)  % total degree d with m variables
Input: d -- tuple degree or total degree
m -- number of variables for total degree
Output: the number of monomials within the degree d
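As a hedged consistency check with the mvPolynMonomBasis example below, the
tuple degree bound [3;2] admits (3+1)*(2+1) = 12 monomials:
>> n = mvPolynMonomTotal([3;2])   % expected n = 12, the number of columns of mvPolynMonomBasis([3;2])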
mvPolynMonomBasis
Generate the monomial basis
Input: d --- degree bound
n --- (optional) number of variables if d is total degree
Output: B --- the coefficient matrix whose columns represent the basis
Example:
>> mvPolynMonomBasis([3;2])
ans =
     0     1     2     3     0     1     2     3     0     1     2     3
     0     0     0     0     1     1     1     1     2     2     2     2
     1     1     1     1     1     1     1     1     1     1     1     1
mvPolynProject
Project an m-variate polynomial to an n-variate polynomial
using given values on the m-n variables
Syntax: G = mvPolynProject(F, x, ix)
Input: F --- a multivariate polynomial in coef. matrix
x --- a vector of m-n values to be used for projection
ix --- a vector of indices of the m-n variables
Output: G --- the projected polynomial
Example: To project a polynomial F(x1,x2,x3,x4) to F(x1,1.6,x3,6.8):
>> G = mvPolynProject(F,[1.6,6.8],[2,4])
mvPolynSort
Sort the terms of a multivariate polynomial f according to
the lexicographical order, and combine the like terms
Syntax: >> G = mvPolynSort(F)
>> G = mvPolynSort(F, tord)
>> [G,idg] = mvPolynSort(F)
>> [G,idg] = mvPolynSort(F, tord)
Input: F -- (matrix) multivariate polynomial in coeff. matrix
tord -- (string, optional) 'lex' or 'grlex', term order type
The default is 'lex' if not provided
Output: G --- (matrix) sorted coeff. matrix of the input polynomial
idg --- (vector) the indices of G terms
Examples:
>> F
F =
     0     0     4     3     3     0
     0     0     1     0     0     1
     0     0     1     0     0     2
     8     7    -7     2     4    -5
>> [G,idg] = mvPolynSort(F)
G =
0 3 4 0
0 0 1 1
0 0 1 2
15 6 -7 -5
idg =
1 4 20 26
>> [G,idg] = mvPolynSort(F,'grlex')
G =
0 3 0 4
0 0 1 1
0 0 2 1
15 6 -5 -7
idg =
1 11 19 61
mvGCDfixedDegree
mvGCDfixedDegree computes the nearest greatest common divisor of a
multivariate polynomial pair (F,G) with a fixed GCD degree.
The code is optimized under the assumption that F, G and the
GCD triplet are sparse fewnomials
Syntax: >> [u,v,w,res,cond] = mvGCDfixedDegree(F,G,gcddeg)
Input: F, G -- the polynomial pair as coefficient matrices
gcddeg -- the tuple degree of the GCD, if known
Output: u,v,w -- the numerical GCD triplet such that
u*v approximates F, u*w approximates G
res -- the (weighted) residual
cond -- the numerical GCD condition number
Example:
>> F
F =
     0     2     1     3
     0     1     1     2
     0     0     3     3
     3    -6     1    -2
>> G
G =
     0     1     1     2
     0     0     1     1
     0     2     3     5
     3     9     1     3
>> [u,v,w,res,cond] = mvGCDfixedDegree(F,G,[1,1,3])
u =
         0    1.0000
         0    1.0000
         0    3.0000
  -11.6190   -3.8730
v =
         0    2.0000
         0    1.0000
         0         0
   -0.2582    0.5164
w =
         0    1.0000
         0         0
         0    2.0000
   -0.2582   -0.7746
res =
  3.0461e-034
cond =
    0.8969
GrlexMonomialIndex
Find the index of a given monomial in the graded lexicographical order
Syntax: >> idx = GrlexMonomialIndex(m)
>> idx = GrlexMonomialIndex(m,d)
Input m -- (vector) the exponents of the monomial
(e.g. [3;1;2] represents x^3*y*z^2)
d -- (optional) total degree of the monomial
Case 1: If d is not provided, the output index is the location of the
        monomial among all monomials in graded lexicographical order
Case 2: If d is provided, the output index is the location of the
        monomial among the monomials of total degree d
Output idx -- the index of the monomial
Example:
>> GrlexMonomialIndex([2;0;1;1])
ans =
45
>> GrlexMonomialIndex([2;0;1;1],4)
ans =
10
GrlexNextMonomial
Find the next monomial in graded lexicographical order
Syntax: >> M1 = GrlexNextMonomial(M0,n)
Input: M0 -- (vector) exponents of a monomial
(e.g. [3;1;2] represents x^3*y*z^2)
n -- (integer) number of variables
Output: M1 -- (vector) exponents of the next monomial
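The two Grlex utilities are consistent with each other in the sense that
stepping to the next monomial should advance the index by one (a hedged check,
inferred from the descriptions above rather than stated in the documentation):
>> M1 = GrlexNextMonomial([2;0;1;1], 4);
>> GrlexMonomialIndex(M1)     % expected to be 46, one past the index 45 of [2;0;1;1]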
LinearSolve
Solve for the general numerical solution of
(i) the matrix-vector equation A*z = b for z = z0 + Kernel(A)
(ii) the linear operator equation L(z) = b for z = z0 + Kernel(L)
within an error tolerance tol, where z0 is the minimum norm solution
and the kernel is the numerical kernel within the error tolerance tol
Syntax for solving A*z = b (within error tolerance tol)
>> [z, K, cnd, res] = LinearSolve(A, b)
>> [z, K, cnd, res] = LinearSolve(A, b, tol)
Syntax for solving L(z) = b (within error tolerance tol)
>> [z, K, cnd, res] = LinearSolve({L, Domain, para}, b)
>> [z, K, cnd, res] = LinearSolve({L, Domain, para}, b, tol)
>> [z, K, cnd, res] = LinearSolve({L, Domain, para, r}, b)
>> [z, K, cnd, res] = LinearSolve({L, Domain, para, r}, b, tol)
Input (case i: Matrix-vector equation A*z = b):
A, b -- coefficient matrix and right-hand side vector in A*z=b
tol -- error tolerance
Input (case ii: Linear operator equation L(z) = b):
L -- Matlab function handle for calculating the linear
transformation with syntax >> L(X1,...,Xm,Q1,...,Qk)
where X1,...,Xm and Q1,...,Qk are variables and
parameters of L respectively. Each one of X1,...,Xm is
(i) a matrix (including vector), or
(ii) a polynomial
to be transformed by L.
The remaining parameters Q1,...,Qk must be
provided as cell entries for para.
If the codomain is a product space, the output of the
function L must be a cell array
Domain -- domain of the linear transformation L represented by
          (i) an mxn matrix of 0's and 1's representing
              0 entries and variable entries respectively, or
          (ii) a polynomial with variable coefficients being
               nonzero, or
          (iii) a cell array of matrices in (i) and/or (ii)
          assuming the domain is a product of matrix spaces and
          polynomial spaces
para -- (cell) parameters needed for running L
        **** L must run with >> L(Domain{:},para{:}) ****
r -- (integer, optional) if provided, the rank-r TSVD
     solution and kernel are computed (in other words,
     L and b are replaced with their rank-r projections)
b -- (matrix, polynomial or cell) the right-hand side of
     the equation L(z) = b; must be in the codomain of L
tol -- (numeric) error tolerance (< 1),
       or tol > 1, in which case r = tol and the rank-r projection
       of the linear transformation is used
Output: z -- the minimum norm solution so that ||L(z)-b|| is minimized
K -- an orthonormal basis for the kernel {y | L(y) = 0}
cnd -- condition number of the representation matrix
res -- (vector) the residuals ||L(z)-b||, ||L(K{1})||, ..., ||L(K{n})||
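A minimal matrix-vector sketch of case (i) (the matrix and right-hand side are
illustrative assumptions): for a rank-deficient A, LinearSolve returns the
minimum norm solution together with an orthonormal basis for the numerical kernel.
>> A = [1 2; 2 4];  b = [1; 2];                   % rank(A) = 1, consistent system
>> [z, K, cnd, res] = LinearSolve(A, b, 1e-10)    % z should be near [0.2; 0.4],
                                                  % K should span the direction [2; -1]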