
P26

Matrix: Basics

Matrix: Definition

A real matrix is a set of real elements arranged in rows and columns.

$$A=\begin{bmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \\ a_{20} & a_{21} & a_{22} \end{bmatrix}=\begin{bmatrix} \mathbf{a}_0 & \mathbf{a}_1 & \mathbf{a}_2 \end{bmatrix} \in \mathbb{R}^{3\times 3}$$

$A^T=A \;\Leftrightarrow\; A$ is symmetric.

P27

Matrix: Multiplication

How to do matrix-vector and matrix-matrix multiplication? (Omitted)

  • $AB\neq BA$ (in general); $(AB)\mathbf{x}=A(B\mathbf{x})$
  • $(AB)^T=B^TA^T$; $(A^TA)^T=A^TA$
  • $I\mathbf{x}=\mathbf{x}$; $AI=IA=A$
  • $A^{-1}$ (the inverse): $AA^{-1}=A^{-1}A=I$
  • $(AB)^{-1}=B^{-1}A^{-1}$
  • Not every matrix is invertible, e.g., $A=\begin{bmatrix}0&0&0\\0&0&0\\0&0&0\end{bmatrix}$
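The identities above are easy to check numerically. A minimal sketch with numpy, using arbitrary random matrices purely for illustration:

```python
import numpy as np

# Arbitrary 3x3 matrices and a vector, for illustration only.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

# Matrix products generally do not commute: AB != BA.
assert not np.allclose(A @ B, B @ A)

# Associativity with a vector: (AB)x = A(Bx).
assert np.allclose((A @ B) @ x, A @ (B @ x))

# Transpose of a product reverses the order: (AB)^T = B^T A^T.
assert np.allclose((A @ B).T, B.T @ A.T)

# A^T A is always symmetric.
assert np.allclose((A.T @ A).T, A.T @ A)

# Inverse of a product reverses the order: (AB)^{-1} = B^{-1} A^{-1}.
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
```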

P28

Matrix: Orthogonality

An orthogonal matrix is a matrix whose columns are orthogonal unit vectors.

$$A=\begin{bmatrix}\mathbf{a}_0 & \mathbf{a}_1 & \mathbf{a}_2\end{bmatrix} \quad\text{such that}\quad \mathbf{a}_i^T\mathbf{a}_j=\begin{cases}1 & \text{if } i=j \text{ (unit)}\\ 0 & \text{if } i\neq j \text{ (orthogonal)}\end{cases}$$

$$A^TA=\begin{bmatrix}\mathbf{a}_0^T\\\mathbf{a}_1^T\\\mathbf{a}_2^T\end{bmatrix}\begin{bmatrix}\mathbf{a}_0 & \mathbf{a}_1 & \mathbf{a}_2\end{bmatrix}=\begin{bmatrix}\mathbf{a}_0^T\mathbf{a}_0 & \mathbf{a}_0^T\mathbf{a}_1 & \mathbf{a}_0^T\mathbf{a}_2\\ \mathbf{a}_1^T\mathbf{a}_0 & \mathbf{a}_1^T\mathbf{a}_1 & \mathbf{a}_1^T\mathbf{a}_2\\ \mathbf{a}_2^T\mathbf{a}_0 & \mathbf{a}_2^T\mathbf{a}_1 & \mathbf{a}_2^T\mathbf{a}_2\end{bmatrix}=I$$

$$A^T=A^{-1}$$
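A rotation matrix makes this concrete. The sketch below builds a rotation about the z-axis (the angle is arbitrary, chosen only for illustration) and verifies $A^TA=I$ and $A^T=A^{-1}$:

```python
import numpy as np

# Rotation about the z-axis by an arbitrary angle t.
t = 0.7
A = np.array([
    [np.cos(t), -np.sin(t), 0.0],
    [np.sin(t),  np.cos(t), 0.0],
    [0.0,        0.0,       1.0],
])

# The columns are orthogonal unit vectors, so A^T A = I ...
assert np.allclose(A.T @ A, np.eye(3))

# ... which means the transpose is the inverse: A^T = A^{-1}.
assert np.allclose(A.T, np.linalg.inv(A))
```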

P29

Matrix Transformation

A rotation can be represented by an orthogonal matrix.

$xyz$ is the world frame and $uvw$ is the local frame; the rotation matrix describes the state of the local frame expressed in the world frame.

P30
A scaling can be represented by a diagonal matrix.

P31

Matrix Decomposition

Singular Value Decomposition

A matrix can be decomposed into:
$$A=UDV^T \quad\text{such that } D \text{ is diagonal, and } U \text{ and } V \text{ are orthogonal.}$$
The diagonal elements of $D$ are the singular values.

Any linear deformation can be decomposed into three steps: rotation, scaling and rotation:

✅ The rotation, scaling, and rotation correspond to $V^T$, $D$, and $U$ respectively. Note the order!
Every matrix $A$ has an SVD.
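A minimal numpy sketch of the SVD, using an arbitrary deformation matrix chosen only for illustration. Note that `np.linalg.svd` returns $V^T$ directly, not $V$:

```python
import numpy as np

# An arbitrary deformation matrix, for illustration only.
A = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 0.5],
              [0.0, 1.0, 1.5]])

# numpy returns U, the singular values (the diagonal of D), and V^T.
U, s, Vt = np.linalg.svd(A)

# U and V are orthogonal; the singular values are non-negative.
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(3))
assert np.all(s >= 0)

# Reconstruct A = U D V^T: rotate (V^T), then scale (D), then rotate (U).
assert np.allclose(U @ np.diag(s) @ Vt, A)
```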

P32

Eigenvalue Decomposition

A symmetric matrix can be decomposed into:
$$A=UDU^{-1} \quad\text{such that } D \text{ is diagonal, and } U \text{ is orthogonal.}$$
The diagonal elements of $D$ are the eigenvalues.

ED can be viewed as a special case of SVD that applies only to symmetric matrices, in which case $U=V$.
Since $U$ is orthogonal, the decomposition can also be written as $A=UDU^T$.

As in the textbook
Let $U=[\mathbf{u}_i]$; we have:
$$A\mathbf{u}_i=UDU^T\mathbf{u}_i=UD\begin{bmatrix}0\\1\\0\end{bmatrix}=U\begin{bmatrix}0\\d_i\\0\end{bmatrix}=d_i\mathbf{u}_i$$
$\mathbf{u}_i$ is the eigenvector of $d_i$; $d_i$ is the eigenvalue.

We can apply eigenvalue decomposition to asymmetric matrices too, if we allow eigenvalues and eigenvectors to be complex. Not considered here.

✅ Graphics does not deal with complex numbers, so eigenvalue decomposition of asymmetric matrices is not considered here.
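The eigendecomposition can be sketched with numpy's `eigh`, which is specialized for symmetric matrices. The matrix below is an arbitrary symmetric example, for illustration only:

```python
import numpy as np

# An arbitrary symmetric matrix, for illustration only.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh handles symmetric matrices: A = U D U^T with U orthogonal.
d, U = np.linalg.eigh(A)

assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(U @ np.diag(d) @ U.T, A)

# Each column u_i is an eigenvector: A u_i = d_i u_i.
for i in range(3):
    assert np.allclose(A @ U[:, i], d[i] * U[:, i])
```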

P33

Symmetric Positive Definiteness (s.p.d.)

Definition

$A$ is s.p.d. if and only if $\mathbf{v}^TA\mathbf{v}>0$, for any $\mathbf{v}\neq\mathbf{0}$.

$A$ is symmetric semi-definite if and only if $\mathbf{v}^TA\mathbf{v}\geq 0$, for any $\mathbf{v}\neq\mathbf{0}$.

✅ Positive definiteness is used in finite element computations and when working with Hessian matrices.

What does this even mean???

How to understand SPD

$$d>0 \;\Leftrightarrow\; vdv>0, \text{ for any } v\neq 0.$$

$$d_0,d_1,\dots>0 \;\Leftrightarrow\; \mathbf{v}^TD\mathbf{v}=\mathbf{v}^T\begin{bmatrix}d_0&&\\&d_1&\\&&\ddots\end{bmatrix}\mathbf{v}>0, \text{ for any } \mathbf{v}\neq\mathbf{0}.$$

✅ A diagonal matrix built from positive reals: an extension of the first formula.

$$d_0,d_1,\dots>0 \;\Leftrightarrow\; \mathbf{v}^T(UDU^T)\mathbf{v}=(U^T\mathbf{v})^TD(U^T\mathbf{v})>0, \text{ for any } \mathbf{v}\neq\mathbf{0},$$
since $U$ is orthogonal, so $U^T\mathbf{v}\neq\mathbf{0}$ whenever $\mathbf{v}\neq\mathbf{0}$.

✅ The third formula extends the second.

P34

How to check SPD

  • A is s.p.d. if and only if all of its eigenvalues are positive:
    $A=UDU^T$ and $d_0,d_1,\dots>0$.

  • But eigenvalue decomposition is a stupid idea most of the time, since it takes lots of time to compute.

✅ In practice, positive definiteness is not checked via ED, because ED is computationally expensive.

  • In practice, people often choose other ways to check if A is s.p.d. For example,

$$a_{ii}>\sum_{j\neq i}|a_{ij}| \text{ for all } i$$
A diagonally dominant matrix is p.d.

$$\begin{bmatrix}4&3&0\\1&5&3\\8&0&9\end{bmatrix}\qquad 4>3+0 \quad\quad 5>1+3 \quad\quad 9>8+0$$

✅ A diagonally dominant matrix is necessarily positive definite, but a positive definite matrix is not necessarily diagonally dominant.
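The diagonal-dominance test is cheap to implement. A sketch, using the example matrix from the slide plus a second matrix (chosen here for illustration) that is positive definite but not diagonally dominant:

```python
import numpy as np

def is_diagonally_dominant(A: np.ndarray) -> bool:
    """Strict row diagonal dominance: a_ii > sum over j != i of |a_ij|."""
    off_diag = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.diag(A) > off_diag))

# The example from the slide: 4 > 3+0, 5 > 1+3, 9 > 8+0.
A = np.array([[4.0, 3.0, 0.0],
              [1.0, 5.0, 3.0],
              [8.0, 0.0, 9.0]])
assert is_diagonally_dominant(A)
# Its symmetric part has only positive eigenvalues, i.e. v^T A v > 0.
assert np.all(np.linalg.eigvalsh((A + A.T) / 2) > 0)

# The converse fails: this matrix is p.d. (both eigenvalues positive)
# but row 0 is not diagonally dominant (1 < 2).
B = np.array([[1.0, 2.0],
              [2.0, 5.0]])
assert not is_diagonally_dominant(B)
assert np.all(np.linalg.eigvalsh(B) > 0)
```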

  • Finally, a s.p.d. matrix must be invertible:
    $A^{-1}=(U^T)^{-1}D^{-1}U^{-1}=UD^{-1}U^T.$

P35

Example

Prove that if $A$ is s.p.d., then $B=\begin{bmatrix}A&-A\\-A&A\end{bmatrix}$ is symmetric semi-definite.

For any $\mathbf{x}$ and $\mathbf{y}$, we know:

$$\begin{bmatrix}\mathbf{x}^T&\mathbf{y}^T\end{bmatrix}B\begin{bmatrix}\mathbf{x}\\\mathbf{y}\end{bmatrix}=\begin{bmatrix}\mathbf{x}^T&\mathbf{y}^T\end{bmatrix}\begin{bmatrix}A&-A\\-A&A\end{bmatrix}\begin{bmatrix}\mathbf{x}\\\mathbf{y}\end{bmatrix}=\mathbf{x}^TA(\mathbf{x}-\mathbf{y})-\mathbf{y}^TA(\mathbf{x}-\mathbf{y})=(\mathbf{x}-\mathbf{y})^TA(\mathbf{x}-\mathbf{y})$$

Since $A$ is s.p.d., we must have:

$$\begin{bmatrix}\mathbf{x}^T&\mathbf{y}^T\end{bmatrix}B\begin{bmatrix}\mathbf{x}\\\mathbf{y}\end{bmatrix}\geq 0$$
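The result can be checked numerically. The sketch below uses an arbitrary small s.p.d. matrix (for illustration) and confirms that the block matrix is semi-definite but not strictly definite, since any vector with $\mathbf{x}=\mathbf{y}$ makes $(\mathbf{x}-\mathbf{y})^TA(\mathbf{x}-\mathbf{y})$ vanish:

```python
import numpy as np

# An arbitrary small s.p.d. matrix (symmetric, diagonally dominant).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Build the block matrix B = [[A, -A], [-A, A]].
B = np.block([[A, -A], [-A, A]])

# B is symmetric, and all its eigenvalues are >= 0 (semi-definite).
assert np.allclose(B, B.T)
assert np.all(np.linalg.eigvalsh(B) >= -1e-12)

# B is not strictly definite: any vector with x = y gives zero,
# since then (x - y)^T A (x - y) = 0.
v = np.array([1.0, 2.0, 1.0, 2.0])  # x = y = (1, 2)
assert np.isclose(v @ B @ v, 0.0)
```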

P36

Linear Solver

Many numerical problems end up requiring the solution of a linear system:

$$A\mathbf{x}=\mathbf{b}$$

It's expensive to compute $A^{-1}$, especially if $A$ is large and sparse. So we cannot simply do $\mathbf{x}=A^{-1}\mathbf{b}$.

There are two popular linear solver approaches: direct and iterative.

✅ When $A$ is sparse, $A^{-1}$ is usually not sparse. If $A$ is large, $A^{-1}$ takes a lot of memory.

P37

Direct Linear Solver

Method

A direct solver is typically based on LU factorization, or one of its variants: Cholesky, LDLT, etc.

LU can be used for asymmetric matrices.
Cholesky and LDLT apply only to symmetric matrices, but consume less memory.
How to perform LU factorization is not covered here.

$$A=LU=\begin{bmatrix}l_{00}&&\\l_{10}&l_{11}&\\\vdots&&\ddots\end{bmatrix}\begin{bmatrix}\ddots&&\vdots\\&u_{n-1,n-1}&u_{n-1,n}\\&&u_{n,n}\end{bmatrix}$$
where $L$ is lower triangular and $U$ is upper triangular.
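A minimal sketch of the idea: Doolittle LU factorization without pivoting (adequate for the diagonally dominant example matrix used here; production solvers pivot and exploit sparsity), followed by forward/back substitution. Factorizing once and reusing $L$ and $U$ for several right-hand sides is the point:

```python
import numpy as np

def lu_factorize(A):
    """Doolittle LU without pivoting: A = L U with L unit lower triangular."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution (L y = b), then back substitution (U x = y)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0, 0.0],
              [1.0, 5.0, 3.0],
              [8.0, 0.0, 9.0]])
L, U = lu_factorize(A)          # factorize once (the expensive step) ...
assert np.allclose(L @ U, A)

for b in (np.array([1.0, 2.0, 3.0]), np.array([0.0, -1.0, 4.0])):
    x = lu_solve(L, U, b)       # ... then solve cheaply for each b
    assert np.allclose(A @ x, b)
```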

P38

分析

  • When A is sparse, L and U are not so sparse. Their sparsity depends on the permutation. (See MATLAB.)

The sparsity of L and U depends on the row/column ordering, so a permutation is usually applied before LU factorization to obtain a good ordering.

  • It contains two steps: factorization and solving. If we must solve many linear systems with the same A, we can factorize it only once.

LU factorization is the dominant cost; factorizing only once saves a large amount of computation.

  • Cannot be easily parallelized: Intel MKL PARDISO

P39

Iterative Linear Solver

An iterative solver has the form:

$$\mathbf{x}^{[k+1]}=\mathbf{x}^{[k]}+\alpha M^{-1}(\mathbf{b}-A\mathbf{x}^{[k]})$$

where $\alpha$ is a relaxation factor and $M$ is a matrix approximating $A$.

Why does it work?

$$\mathbf{b}-A\mathbf{x}^{[k+1]}=\mathbf{b}-A\mathbf{x}^{[k]}-\alpha AM^{-1}(\mathbf{b}-A\mathbf{x}^{[k]})=(I-\alpha AM^{-1})(\mathbf{b}-A\mathbf{x}^{[k]})=(I-\alpha AM^{-1})^{k+1}(\mathbf{b}-A\mathbf{x}^{[0]})$$

So,

$$\mathbf{b}-A\mathbf{x}^{[k+1]}\to\mathbf{0}, \text{ if } \rho(I-\alpha AM^{-1})<1.$$

$\mathbf{b}-A\mathbf{x}^{[k+1]}$ is the residual at the next iteration; for the iteration to converge, it should tend to $\mathbf{0}$.

$\rho$: the spectral radius of the matrix (the largest absolute value of its eigenvalues).

✅ In practice $\rho$ is not actually computed; instead $\alpha$ is tuned by trial and error, because computing eigenvalues is expensive.

P40
$M$ must be easier to solve:

  • $M=\mathrm{diag}(A)$: Jacobi Method
  • $M=\mathrm{lower}(A)$: Gauss-Seidel Method

The convergence can be accelerated: Chebyshev, Conjugate Gradient, … (Omitted here.)
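The Jacobi variant ($M=\mathrm{diag}(A)$) can be sketched in a few lines; inverting a diagonal $M$ is trivial, which is the whole point. The matrix below is the diagonally dominant example used earlier (chosen for illustration), so plain Jacobi with $\alpha=1$ converges, and the spectral-radius condition can be checked directly:

```python
import numpy as np

def jacobi_solve(A, b, alpha=1.0, iters=100):
    """Iterate x <- x + alpha * M^{-1}(b - A x) with M = diag(A)."""
    m_inv = 1.0 / np.diag(A)   # M is diagonal, so M^{-1} is trivial
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + alpha * m_inv * (b - A @ x)
    return x

# A diagonally dominant matrix, so Jacobi with alpha = 1 converges.
A = np.array([[4.0, 3.0, 0.0],
              [1.0, 5.0, 3.0],
              [8.0, 0.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])

# Convergence condition: spectral radius of I - alpha M^{-1} A < 1
# (same spectral radius as I - alpha A M^{-1}, by similarity).
T = np.eye(3) - np.diag(1.0 / np.diag(A)) @ A
assert np.max(np.abs(np.linalg.eigvals(T))) < 1.0

x = jacobi_solve(A, b)
assert np.allclose(A @ x, b)
```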

Advantages:

  • simple
  • fast for inexact solution
  • parallelizable

Disadvantages:

  • convergence condition

✅ For example, convergence may require the matrix to be positive definite or diagonally dominant.

  • slow for exact solution

This article is from CaterpillarStudyGroup. Please credit the source when reposting.

https://caterpillarstudygroup.github.io/GAMES103_mdbook/