Sensitivity and Computation of a Defective Eigenvalue

This website provides the source code and test results for Algorithm PseudoEig, developed in the paper Sensitivity and Computation of a Defective Eigenvalue by Zhonggang Zeng.


Matlab m-files for Algorithm PseudoEig:
               
PseudoEig.m,   peigmap.m,  peigjac.m, PEigRefine.m    (The NAClab package is required )



Intuitive implementation of Algorithm PseudoEig using NAClab


The NAClab package (Numerical Algebraic Computing laboratory) provides a quick and intuitive platform for implementing Algorithm PseudoEig.

For details of the algorithm and its theory, see the preprint  Sensitivity and Computation of a Defective Eigenvalue  by  Zhonggang Zeng

A short sketch of the algorithm is as follows.


[Figure: sketch of the method; see the preprint for the formulation]
NAClab provides an intuitive and straightforward platform to implement such an algorithm.  Just follow the steps below.

Step 1.  Download and install the NAClab package.


Step 2.  Write a Matlab function for the mapping g and save it as peigmap.m, which is almost WYSIWYG.

Step 3.  Write a Matlab function for the Jacobian linear transformation and save it as peigjac.m, which is also almost WYSIWYG.

Step 4.  Write the main function PseudoEig and save it as PseudoEig.m, where the Gauss-Newton iteration calls the generic module GaussNewton.m in NAClab in a single line:

[Z,res,lcond] = GaussNewton({@peigmap,{1,ones(n,k)},{A,C,S}}, @peigjac, {lambda0,X0},1);

By design, the generic module GaussNewton expects the following input items:
   --- a cell containing (i) the Matlab function handle for the mapping, i.e. @peigmap,
                         (ii) the domain C x C^{n x k} represented by a cell, i.e. {1, ones(n,k)}, and
                         (iii) the parameters for the mapping in a cell, i.e. {A,C,S};
   --- the Matlab function handle for the Jacobian linear transformation, i.e. @peigjac;
   --- the cell of initial iterates, i.e. {lambda0, X0};
   --- the optional iteration-tracking flag, 0 or 1.
The output Z is a cell whose first component is the pseudo-eigenvalue and whose second component is the matrix X.
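For readers curious about what the generic module does conceptually, below is a simplified Gauss-Newton loop written in plain Matlab.  It is an illustrative stand-in only; it is not the NAClab module GaussNewton and not Algorithm PseudoEig itself.  The function and Jacobian handles are generic placeholders operating on a vectorized residual.

   function z = gauss_newton_sketch(fun, jac, z0, maxsteps)
   % Simplified generic Gauss-Newton iteration (illustrative stand-in only).
   %   fun(z) returns the residual vector g(z); jac(z) returns its Jacobian matrix.
   z = z0;
   for step = 1:maxsteps
       r = fun(z);                        % residual at the current iterate
       J = jac(z);                        % Jacobian, assumed tall with full column rank
       z = z - J\r;                       % least squares correction and Gauss-Newton update
       fprintf('Step %3d:   residual = %.2e\n', step, norm(fun(z)));
   end
   end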


Examples of using PseudoEig and PEigRefine  (installation of NAClab is required to repeat the examples)


Example 1.  Using PseudoEig, including finding the multiplicity support

NAClab function "MatrixWithGivenJordanForm" is convenient in quick construction of testing matrices:

>> [A,U,V] = MatrixWithGivenJordanForm([2,3],{[4,3],[2 2]});
>> A

A =

     1     1     0     1     0     0     0     1     0     0     0
    -2     2     1     0     1     0     1     0     0     0     0
    -1     0     2     3     0     1     0     1     0     0     0
     1     0     0     2    -1     0    -1     0     0     0     0
     0    -1     0     1     1     1     0     0     0     1    -1
    -4     0     1     0     3     2     3     0     0     0     0
    -1     2     0     0     1    -1     2     1     0    -1     1
    -1     1     0     1     0     0     0     3     1     0     0
     1     0     0     0     0     0     0    -1     4     0     0
     0     0     0     0     1     0     0     0     0     2     1
     0     1     0    -1     3    -1     0     0     0    -2     5


Apply a conventional eigenvalue solver to obtain initial eigenvalue estimates:

>> eig(A)

ans =

  3.000000000000003 + 0.000000063443468i
  3.000000000000003 - 0.000000063443468i
  2.999999999999997 + 0.000000021680931i
  2.999999999999997 - 0.000000021680931i
  2.000131207566168 + 0.000131197500271i
  2.000131207566168 - 0.000131197500271i
  2.000003852182530 + 0.000006672920661i
  2.000003852182530 - 0.000006672920661i
  1.999992295634929 + 0.000000000000000i
  1.999868792433838 + 0.000131217576475i
  1.999868792433838 - 0.000131217576475i


Pick an approximate eigenvalue, say 1.999868792433838 + 0.000131217576475i, and find the geometric multiplicity by SVD:

>> svd(A-(1.999868792433838 + 0.000131217576475i)*eye(size(A)))

ans =

   7.486664684066285
   5.573841967969352
   3.961701858109521
   2.441112093740693
   2.253848558674539
   0.773876810987828
   0.652336598832444
   0.341709210710030
   0.089726975408410
   0.000000000002225
   0.000000000000000


Clearly, the geometric multiplicity is 2.   We can then determine the Segre anchor k by trial and error.  Start with, say k=2:

>> [lambda,X,C,S,res,lcond] = PseudoEig(A,1.999868792433838 + 0.000131217576475i,2,2);
      Step   1:     residual =   1.82e-08
      Step   2:     residual =   6.38e-09
      Step   3:     residual =   1.66e-09
>> res,lcond
res =
     1.800832453591654e-11
lcond =
     9.970824679378054e+08


A tiny residual paired with a large condition number indicates that k=2 underestimates the Segre anchor, so increase it to k=3:

>> [lambda,X,C,S,res,lcond] = PseudoEig(A,1.999868792433838 + 0.000131217576475i,2,3);
      Step   1:     residual =   2.02e-07
      Step   2:     residual =   6.93e-14   ...

>> res, lcond
res =
     2.775557561562891e-16
lcond =
     4.162970862477419e+03


It appears that k=3 is the correct Segre anchor.  The pseudo-eigenvalue:

>> lambda
lambda =
  2.000000000000000 - 0.000000000000000i


The pseudo-eigenvalue approximates the exact eigenvalue to the last digit.  To confirm the Segre anchor, try increasing it to k=4:

>> [lambda,X,C,S,res,lcond] = PseudoEig(A,1.999868792433838 + 0.000131217576475i,2,4);
      Step   1:     residual =   4.35e-02
      Step   2:     residual =   5.11e-05
      Step   3:     residual =   7.20e-05
      Step   4:     residual =   4.24e-05
      Step   5:     residual =   4.14e-05
      Step   6:     residual =   4.14e-05
      Step   7:     residual =   4.13e-05
      Step   8:     residual =   4.14e-05
      Step   9:     residual =   4.14e-05
      Step  10:     residual =   4.14e-05


The residual jumps dramatically from about 1e-16 to about 1e-5, confirming that k=4 overestimates the Segre anchor.  Since the norm of the Moore-Penrose inverse of X is small, no refinement is needed:

>> norm(pinv(X))
ans =
   2.649718399297023
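
The residual res and the norm of the Moore-Penrose inverse of X combine into the backward error estimate that Example 2 below relies on; with res and X taken from the accepted k = 3 run, it is a one-line computation (illustrative):

   berr = res*norm(pinv(X))     % backward error estimate for the accepted pseudo-eigenvalue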


Example 2.  Refine the pseudo-eigenvalue

>> A =   [ 2           1           0           0           0
           0          -8           1           0           0
           0           0           2           1           0
           0           0           0           2           1
           0      -10000        1000        -100          12];

>> [lambda,X,C,S,res,lcond] = PseudoEig(A,1.9998,1,5);
      Step   1:     residual =   7.54e-07
      Step   2:     residual =   7.62e-13
      Step   3:     residual =   8.60e-14
      Step   4:     residual =   1.20e-13
      Step   5:     residual =   9.32e-14
      Step   6:     residual =   8.96e-14
>> lambda

lambda =

   1.999999999992827


The backward error is not small enough:

>> res*norm(pinv(X))
ans =
     5.018090688429607e-08


Apply PEigRefine to refine the result:

>> lambda0 = lambda;  C0 = C; S0 = S; X0 = X;
>> [lambda,X,C,S,res,lcond] = PEigRefine(A,lambda0,X0,C0,S0)
      Step   1:     residual =   1.82e-12
      Step   2:     residual =   1.82e-12
      Step   3:     residual =   1.90e-14
      Step   4:     residual =   1.82e-12
      Step   5:     residual =   1.82e-12
      Step   6:     residual =   1.82e-12
>> lambda
lambda =
     2


The refined backward error is now tiny:

>> res*norm(pinv(X))
ans =
     1.901074022838812e-14


Special structure of the Jacobian matrix
       
The Matlab function peigmat.m rearranges the Jacobian matrix into a nearly upper-triangular form, making it possible to take advantage of the structure to improve efficiency.

Without loss of generality, assume the matrix is in Hessenberg form.  For a 100x100 matrix A, a 100x3 matrix C, and a 3x3 S:

>> M = peigmat(lambda,X,A,C,S);
>> size(M)

ans =

   159   151

>> spy(M)


[Figure: structure of the Jacobian matrix, as shown by spy(M)]
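
One standard way to exploit such a structure is sketched below.  It assumes, purely for illustration, that the rearranged Jacobian M is upper Hessenberg (zero below the first subdiagonal), so one Givens rotation per column triangularizes it; the structure actually produced by peigmat.m may differ, and this is not the code used by PseudoEig.

   % Least squares solve with an upper-Hessenberg M via Givens rotations (illustrative; overwrites M).
   [mrow, ncol] = size(M);
   b = randn(mrow, 1);                        % a right-hand side, just for illustration
   for j = 1:ncol
       i = j + 1;
       if i <= mrow && M(i,j) ~= 0
           a = M(j,j);  c = M(i,j);
           r = hypot(abs(a), abs(c));
           G = [conj(a) conj(c); -c a] / r;   % unitary 2x2 rotation zeroing M(i,j)
           M([j i], j:ncol) = G * M([j i], j:ncol);
           b([j i])         = G * b([j i]);
       end
   end
   y = triu(M(1:ncol, 1:ncol)) \ b(1:ncol);   % back-substitution gives the least squares solution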




Example 4:  Contradicting sensitivity measures of the same eigenvalue


>> A = [1503/500,2,201/200,-1001/1000,-1/500,-1/1000,-1/1000,-1;
5,2,5,-1,-2,-1,-1,0;
-2503/500,-3,-601/200,2001/1000,1501/500,2001/1000,1/1000,2;
-6,-1,-6,3,5,3,0,1;
-5,-1,-5,1,6,3,0,1;
1,0,1,0,-1,1,0,0;
-4,-2,-4,1,3,2,2,2;
5,0,5,-1,-2,-1,-1,2]

>> e = eig(A)

e =
  2.001415092944351 + 0.002122713121581i
  2.001415092944351 - 0.002122713121581i
  1.998908958350295 + 0.002124009818083i
  1.998908958350295 - 0.002124009818083i
  2.002684635776857 + 0.000000000000000i
  1.997667261633844 + 0.000000000000000i
  2.000000055770595 + 0.000000000000000i
  1.999999944229407 + 0.000000000000000i


By PseudoEig

>> [lambda,X,C,S,res,lcond] = PseudoEig(A,e(1),2,2);
      Step   1:     residual =   9.84e-06
      Step   2:     residual =   9.82e-11
      Step   3:     residual =   6.38e-16
      Step   4:     residual =   5.96e-16
      Step   5:     residual =   4.44e-16
      Step   6:     residual =   5.15e-16
      Step   7:     residual =   5.05e-16
      Step   8:     residual =   9.33e-16
>> lambda, lcond

lambda =

  2.000000000000000 - 0.000000000000000i


lcond =

     3.304315987763452e+02

The 2x2 condition number is a manageable 3.3e+02, while the sensitivity measured by the spectral projector norm is a huge 4.4e14.
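
The spectral projector norm quoted above comes from the paper's analysis.  A related classical indicator that is easy to obtain in Matlab is the per-eigenvalue condition number 1/|w'*v| via condeig; for a nearly defective eigenvalue these individual condition numbers are expected to be large, in contrast with the modest lcond above (illustrative):

   [~, D, s] = condeig(A);      % s(i) = classical condition number of the i-th computed eigenvalue
   [diag(D), s]                 % list each eigenvalue next to its condition number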

The NAClab function NumericalJordanForm computes the numerical Jordan Canonical Form:

>> [J,X,e,s,t] = NumericalJordanForm(A,1e-9);
>> format short
>> J

J =

    2.0028         0         0         0         0         0         0         0
         0    2.0000   -0.8321         0         0         0         0         0
         0         0    2.0000    0.2536         0         0         0         0
         0         0         0    2.0000   -1.1073         0         0         0
         0         0         0         0    2.0000    5.2368         0         0
         0         0         0         0         0    2.0000         0         0
         0         0         0         0         0         0    2.0000   -5.3852
         0         0         0         0         0         0         0    2.0000



With a high overall condition number

>> cond(X)

ans =

   5.0956e+13


even though the Jordan structure for the eigenvalue 2.00000000000 is not highly sensitive

>> t(2,2)

ans =

  178.2683


If we use a loose error tolerance of 1e-5, the numerical Jordan form becomes:

>> [J,X,e,s,t] = NumericalJordanForm(A,1e-5);
>> single(J)

ans =

   2.0001249   0.4962730           0           0           0           0           0           0
           0   2.0001249   0.8362637           0           0           0           0           0
           0           0   2.0001249  -0.3181596           0           0           0           0
           0           0           0   2.0001249   1.1284051           0           0           0
           0           0           0           0   2.0001249  18.9835510           0           0
           0           0           0           0           0   2.0001249           0           0
           0           0           0           0           0           0   2.0001249   5.3860307
           0           0           0           0           0           0           0   2.0001249


The residual is within the error tolerance and the condition number is not large:

>> t(1), t(2)
ans =
     2.128714409525071e-06
ans =
     1.590906220366132e+02
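
As a quick sanity check, the computed decomposition can be verified directly; the line below assumes the returned factors satisfy A*X ~ X*J within the stated tolerance (see the NAClab documentation for the exact convention):

   relres = norm(A*X - X*J) / norm(A)    % relative residual of the numerical Jordan decomposition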


Example 1.  Identifying the multiplicity support:

>> A

A =

  0   4   0  -4   0  -2   1   0   0  -1  -1  -1  -1   2   1   0   0  -1   0   0
  0   3   3  -4   1   0   4   1  -1  -1   0   0   0   0   0  -2   0   0  -1  -1
  1  -4   2  10  -3   1  -7  -2  -1   3   2   0   0  -1   0   3  -1   1   1   2
 -1  -1   2   5  -2  -1  -5  -1  -1   2   0  -1   0   1   0   2  -1  -1  -1   1
 -1  -2   2   1   1   1  -1   0  -2   1   0   0   0   1   0   0  -1  -1   0   0
  1   4   1 -12   4   2  13   3   0  -4   0   0  -2  -1   1  -6   1   1   0  -3
 -1  -1   1   5  -2   0  -4  -2   0   1  -1   0   0   1   0   4   0  -1  -1   2
  1   2  -4   0   1   0   1   1   4  -2  -1   1   0  -1   0   1   2   1   0   1
  0  -5   2  10  -5  -1 -10  -2  -1   6   3  -2   0   0   0   3  -3   0   1   2
  1   1   1  -1   2   2   4   0   1  -1  -1   2   0  -1   0   0   2   1  -1   0
  1  -1   0   2   1   2   1   0   1  -1   3   2   0  -1   0   0   1   1  -1   0
 -1  -3   0   5  -1   2  -4  -1   0   1  -1   4   4   1  -2   2   0  -1   0   1
 -2   0   0  -1   0   0   0   0   0   0  -1   0   3   2   0   0   0  -1   0   0
 -3   4  -1  -4   0  -2   1   0   0  -1  -1  -1  -2   5   2   0   0  -1   1   0
 -2   0   0  -1   0   0   0   0   0   0  -1   0   0   2   3   0   0  -1   0   0
  0   0  -2   3   1   2  -1  -1   2  -2  -2   2   0   0   0   5   2   0   0   1
  6   3  -6   3   6   4   7   0   7  -7   1   5  -2  -6   1   0   8   6  -1   0
  0   2  -4  -4   1  -1   4   1   0  -1   0  -1  -1   0   1  -2   0   3   4  -1
  1  -4  -1  11  -4   1  -8  -3  -1   3   2   0   0  -1   0   4  -1   1   4   3
  0   0  -1   1  -2   0  -1  -2   1   0   0   0   0   0   0   1   0   0   0   4


Use the Matlab eig to find eigenvalue estimates (in this transcript, [V,D] = eig(A), e = diag(D), and W holds the corresponding left eigenvectors):

>> diag(D)

ans =

  3.001426431341224 + 0.000000000000000i
  3.000624722484589 + 0.000453755506064i
  3.000624722484589 - 0.000453755506064i
  3.000440399461022 + 0.001356489469455i
  3.000440399461022 - 0.001356489469455i
  2.999761500834580 + 0.000734576671243i
  2.999761500834580 - 0.000734576671243i
  2.998846384868383 + 0.000837895621509i
  2.998846384868383 - 0.000837895621509i
  2.999227553361644 + 0.000000000000000i
  1.999831389084806 + 0.000168679704578i
  1.999831389084806 - 0.000168679704578i
  2.000168610915185 + 0.000168542182678i
  2.000168610915185 - 0.000168542182678i
  1.999989850392488 + 0.000017578122305i
  1.999989850392488 - 0.000017578122305i
  1.999993977339806 + 0.000000000000000i
  2.000020299214969 + 0.000000000000000i
  2.000003011330126 + 0.000005216432308i
  2.000003011330126 - 0.000005216432308i


The eigenvalue D(1,1) =  3.001426431341224 + 0.000000000000000i   appears to be defective since the condition number is high:

>> 1/abs(V(:,1)'*W(:,1))

ans =

     3.096614239827069e+11



The geometric multiplicity is clearly 2, from the numerical nullity identified via the singular values:

>> s = svd(A-e(1)*eye(size(A)))

s =

  40.024122556006361
  22.819868881416550
  11.340731455999819
   9.397825324315559
   6.260902607815194
   4.358978843241264
   3.295092277976436
   2.208518179170260
   2.014640549400185
   1.237827273324948
   0.661297941705164
   0.546549612836940
   0.316193846463476
   0.194202964265260
   0.108370106835163
   0.070055683229182
   0.054676439994994
   0.036243966725033
   0.000000000000002
   0.000000000000001


To identify the Segre anchor k,  run PseudoEig with k = 1, 2, ...

>> for k = 1:6, [lambda,X,C,S,res,lcond] = PseudoEig(A,e(1),2,k); disp([k,res,lcond]); end

     2.000000000000000e+00     1.467949463967694e-13     4.632744825087318e+14
     3.000000000000000e+00     1.245977341450316e-10     1.515278877406559e+10
     4.000000000000000e+00     2.365260962187022e-09     3.476672400527993e+09
     5.000000000000000e+00     1.390184611834986e-15     5.791114059263334e+04
     6.000000000000000e+00     2.622216751629970e-03     1.114112133754904e+05


The Segre anchor is k = 5, the value at which the residual becomes tiny while the condition number drops to a moderate size.

>> [lambda,X,C,S,res,lcond] = PseudoEig(A,e(1),2,5);
>> lambda

lambda =

     3.000000000000086e+00


Example 2.  Refine the pseudo-eigenvalue by orthogonalization


By PseudoEig, we get an approximation of the eigenvalue:

>> A

A =

           2           1           0           0           0
           0          -8           1           0           0
           0           0           2           1           0
           0           0           0           2           1
           0      -10000        1000        -100          12

>> [lambda,X,C,S,res,lcond] = PseudoEig(A,1.999,1,5);
>> lambda

lambda =

     1.999999999997139e+00

The backward error:

>> res*norm(pinv(X))

ans =

     3.671544140185711e-08


Refine the pseudo-eigenvalue by orthogonalization:


>> lambda0 = lambda; X0 = X; C0 = C; S0 = S;
>> [lambda,X,C,S,res,lcond] = PEigRefine(A,lambda0,X0,C0,S0);
>> lambda

lambda =

     2

>> res*norm(pinv(X))

ans =

     2.874961099398239e-14


The backward error is reduced from 3.7e-8 to 2.9e-14.


Example 3.  Refine the pseudo-eigenvalue of a perturbed matrix

>> A

A =

           2           1           0           0           0
           0          -8           1           0           0
           0           0           2           1           0
           0           0           0           2           1
           0      -10000        1000        -100          12

>> E

E =

  -0.092000000000000  -0.653000000000000  -0.201000000000000  -0.416000000000000  -0.787000000000000
  -0.135000000000000  -0.218000000000000   0.054000000000000  -0.136000000000000  -0.255000000000000
   0.651000000000000   0.663000000000000  -0.166000000000000  -0.969000000000000  -0.603000000000000
  -0.833000000000000   0.607000000000000   0.314000000000000   0.969000000000000  -0.020000000000000
  -0.733000000000000  -0.879000000000000   0.256000000000000  -0.665000000000000  -0.321000000000000

>> B = A+1e-5*E

B =

   1.0e+04 *

   0.000199999908000   0.000099999347000  -0.000000000201000  -0.000000000416000  -0.000000000787000
  -0.000000000135000  -0.000800000218000   0.000100000054000  -0.000000000136000  -0.000000000255000
   0.000000000651000   0.000000000663000   0.000199999834000   0.000099999031000  -0.000000000603000
  -0.000000000833000   0.000000000607000   0.000000000314000   0.000200000969000   0.000099999980000
  -0.000000000733000  -1.000000000879000   0.100000000256000  -0.010000000665000   0.001199999679000


Starting from the estimated eigenvalue 1.99, with N, an approximate null vector of B - 1.99*eye(5), passed to PseudoEig as an additional input:

>> [U,T,V] = svd(B-1.99*eye(5));
N = V(:,end);
[lambda,X,C,S,res,lcond] = PseudoEig(B,1.99,1,5,N);
>> lambda

lambda =

   2.004413315460551


Backward error

>> res*norm(pinv(X))

ans =

   0.059004303654087



Refine:

>> lambda0 = lambda; X0 = X; C0 = C; S0 = S;
>> [lambda,X,C,S,res,lcond] = PEigRefine(B,lambda0,X0,C0,S0);
      Step   1:     residual =   2.20e-02
      Step   1:     residual =   1.89e-02
      Step   2:     residual =   2.06e-06
      Step   3:     residual =   2.06e-06
      Step   4:     residual =   2.06e-06
      Step   5:     residual =   2.06e-06
>> lambda

lambda =

   1.999998778227778



With backward error:

>> res*norm(pinv(X))

ans =

     9.603166323203087e-06
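

As a final illustrative check, the refined pseudo-eigenvalue of the perturbed matrix B can be compared with the exact defective eigenvalue 2 of the unperturbed A and with the size of the perturbation; by the sensitivity analysis in the paper, the former difference should be roughly bounded by the condition number lcond times the latter:

   abs(lambda - 2)        % distance from the exact defective eigenvalue of A
   norm(1e-5*E)           % size of the perturbation added to A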