
P3

Topics for the Day

  • Hessian of Elastic Energy

  • Implicit Integration

  • Nonlinear Optimization

✅ Implicit integration is not covered in this lecture; the paper on P12 is recommended instead.
Skip directly to P18, nonlinear optimization.

P4

Hessian of Elastic Energy

P5

Recall that…

P6

Energy Hessian:

P7

SVD Derivative

Since \mathbf{F}=\mathbf{U}\mathbf{\Lambda}\mathbf{V}^T is the singular value decomposition (\mathbf{\Lambda}=\mathrm{diag}(λ_0,λ_1,λ_2)), we can do:

\mathbf{U}^T\frac{∂\mathbf{F}}{∂F_{kl}}\mathbf{V}=\mathbf{U}^T\left(\frac{∂\mathbf{U}}{∂F_{kl}}\mathbf{\Lambda}\mathbf{V}^T+\mathbf{U}\frac{∂\mathbf{\Lambda}}{∂F_{kl}}\mathbf{V}^T+\mathbf{U}\mathbf{\Lambda}\frac{∂\mathbf{V}^T}{∂F_{kl}}\right)\mathbf{V}

\mathbf{U}^T\frac{∂\mathbf{F}}{∂F_{kl}}\mathbf{V}=\left(\mathbf{U}^T\frac{∂\mathbf{U}}{∂F_{kl}}\right)\mathbf{\Lambda}+\frac{∂\mathbf{\Lambda}}{∂F_{kl}}+\mathbf{\Lambda}\left(\frac{∂\mathbf{V}^T}{∂F_{kl}}\mathbf{V}\right)

\mathbf{U}^T\frac{∂\mathbf{F}}{∂F_{kl}}\mathbf{V}=\mathbf{A}\mathbf{\Lambda}+\frac{∂\mathbf{\Lambda}}{∂F_{kl}}+\mathbf{\Lambda}\mathbf{B}

P8

Skew-Symmetric Matrix

Matrix \mathbf{A} is skew-symmetric (or anti-symmetric), if \mathbf{A}=-\mathbf{A}^T:

\mathbf{A}=\begin{bmatrix}0 & a & b\\ -a & 0 & c\\ -b & -c & 0\end{bmatrix}

If \mathbf{D} is diagonal, then:

\mathbf{A}\mathbf{D}=\begin{bmatrix}0 & a & b\\ -a & 0 & c\\ -b & -c & 0\end{bmatrix}\begin{bmatrix}d & & \\ & e & \\ & & f\end{bmatrix}=\begin{bmatrix}0 & ae & bf\\ -ad & 0 & cf\\ -bd & -ce & 0\end{bmatrix}

\mathbf{D}\mathbf{A}=\begin{bmatrix}d & & \\ & e & \\ & & f\end{bmatrix}\begin{bmatrix}0 & a & b\\ -a & 0 & c\\ -b & -c & 0\end{bmatrix}=\begin{bmatrix}0 & ad & bd\\ -ae & 0 & ce\\ -bf & -cf & 0\end{bmatrix}

Both products keep a zero diagonal; only the off-diagonal entries get scaled.

When \mathbf{U} is orthogonal, we have:

\mathbf{0}=\frac{∂(\mathbf{U}^T\mathbf{U})}{∂F_{kl}}=\frac{∂\mathbf{U}^T}{∂F_{kl}}\mathbf{U}+\mathbf{U}^T\frac{∂\mathbf{U}}{∂F_{kl}}=\mathbf{U}^T\frac{∂\mathbf{U}}{∂F_{kl}}+\left(\mathbf{U}^T\frac{∂\mathbf{U}}{∂F_{kl}}\right)^T

Therefore, \mathbf{A}=\mathbf{U}^T\frac{∂\mathbf{U}}{∂F_{kl}} is skew-symmetric. So is \mathbf{B}=\frac{∂\mathbf{V}^T}{∂F_{kl}}\mathbf{V}.
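The skew-symmetry of \mathbf{U}^T\frac{∂\mathbf{U}}{∂t} can be checked numerically on any smooth family of orthogonal matrices. A minimal sketch, assuming numpy; the rotation-about-z curve Rz(t) is a made-up example, differentiated by central finite differences:

```python
import numpy as np

def Rz(t):
    """A smooth orthogonal matrix family: rotation about the z axis by angle t."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

t, h = 0.7, 1e-6
U = Rz(t)
dU = (Rz(t + h) - Rz(t - h)) / (2 * h)  # finite-difference ∂U/∂t
A = U.T @ dU                            # should be skew-symmetric: A = -Aᵀ
```

The same check applies to the SVD factors: any derivative of an orthogonal matrix, pulled back by its transpose, lands in the skew-symmetric matrices.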

P9

SVD Derivative (cont.)

Since \mathbf{F}=\mathbf{U}\mathbf{\Lambda}\mathbf{V}^T is the singular value decomposition (\mathbf{\Lambda}=\mathrm{diag}(λ_0,λ_1,λ_2)), we can do:

\mathbf{U}^T\frac{∂\mathbf{F}}{∂F_{kl}}\mathbf{V}=\mathbf{A}\mathbf{\Lambda}+\frac{∂\mathbf{\Lambda}}{∂F_{kl}}+\mathbf{\Lambda}\mathbf{B}

After expansion, we get:

\mathbf{U}^T\frac{∂\mathbf{F}}{∂F_{kl}}\mathbf{V}=\begin{bmatrix}0 & a_0 & a_1\\ -a_0 & 0 & a_2\\ -a_1 & -a_2 & 0\end{bmatrix}\begin{bmatrix}λ_0 & & \\ & λ_1 & \\ & & λ_2\end{bmatrix}+\begin{bmatrix}\frac{∂λ_0}{∂F_{kl}} & & \\ & \frac{∂λ_1}{∂F_{kl}} & \\ & & \frac{∂λ_2}{∂F_{kl}}\end{bmatrix}+\begin{bmatrix}λ_0 & & \\ & λ_1 & \\ & & λ_2\end{bmatrix}\begin{bmatrix}0 & b_0 & b_1\\ -b_0 & 0 & b_2\\ -b_1 & -b_2 & 0\end{bmatrix}

P10

SVD Derivative (cont.)

Since \mathbf{F}=\mathbf{U}\mathbf{\Lambda}\mathbf{V}^T is the singular value decomposition (\mathbf{\Lambda}=\mathrm{diag}(λ_0,λ_1,λ_2)), we can do:

\mathbf{U}^T\frac{∂\mathbf{F}}{∂F_{kl}}\mathbf{V}=\mathbf{A}\mathbf{\Lambda}+\frac{∂\mathbf{\Lambda}}{∂F_{kl}}+\mathbf{\Lambda}\mathbf{B}

After expansion, we get:

\mathbf{U}^T\frac{∂\mathbf{F}}{∂F_{kl}}\mathbf{V}=\begin{bmatrix}\frac{∂λ_0}{∂F_{kl}} & λ_1a_0+λ_0b_0 & λ_2a_1+λ_0b_1\\ -λ_0a_0-λ_1b_0 & \frac{∂λ_1}{∂F_{kl}} & λ_2a_2+λ_1b_2\\ -λ_0a_1-λ_2b_1 & -λ_1a_2-λ_2b_2 & \frac{∂λ_2}{∂F_{kl}}\end{bmatrix}

\begin{bmatrix}m_{00} & m_{01} & m_{02}\\ m_{10} & m_{11} & m_{12}\\ m_{20} & m_{21} & m_{22}\end{bmatrix}=\begin{bmatrix}\frac{∂λ_0}{∂F_{kl}} & λ_1a_0+λ_0b_0 & λ_2a_1+λ_0b_1\\ -λ_0a_0-λ_1b_0 & \frac{∂λ_1}{∂F_{kl}} & λ_2a_2+λ_1b_2\\ -λ_0a_1-λ_2b_1 & -λ_1a_2-λ_2b_2 & \frac{∂λ_2}{∂F_{kl}}\end{bmatrix}

The left-hand entries m_{ij} are known, since \mathbf{U}^T\frac{∂\mathbf{F}}{∂F_{kl}}\mathbf{V} can be evaluated directly. The diagonal entries give \frac{∂λ_0}{∂F_{kl}},\frac{∂λ_1}{∂F_{kl}},\frac{∂λ_2}{∂F_{kl}}, and each off-diagonal pair (m_{ij}, m_{ji}) forms a 2×2 linear system in one (a, b) pair. Eventually, we get: \mathbf{A}=\mathbf{U}^T\frac{∂\mathbf{U}}{∂F_{kl}}, \mathbf{B}=\frac{∂\mathbf{V}^T}{∂F_{kl}}\mathbf{V} and \frac{∂λ_0}{∂F_{kl}},\frac{∂λ_1}{∂F_{kl}},\frac{∂λ_2}{∂F_{kl}}.
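This recovery can be sketched numerically. A minimal sketch, assuming numpy; the random \mathbf{F} and the index choice (k,l) are made up, and distinct singular values are assumed so the 2×2 systems are solvable:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # generic F with distinct singular values
U, lam, Vt = np.linalg.svd(F)

k, l = 1, 2                       # which entry F_kl to differentiate against
E = np.zeros((3, 3))
E[k, l] = 1.0                     # ∂F/∂F_kl is a single-entry matrix
M = U.T @ E @ Vt.T                # M = Uᵀ (∂F/∂F_kl) V

dlam = np.diag(M).copy()          # diagonal gives ∂λ_i/∂F_kl

def solve_pair(i, j):
    """Solve the 2×2 system from (m_ij, m_ji): m_ij = λ_j a + λ_i b, -m_ji = λ_i a + λ_j b."""
    S = np.array([[lam[j], lam[i]], [lam[i], lam[j]]])
    return np.linalg.solve(S, np.array([M[i, j], -M[j, i]]))

a0, b0 = solve_pair(0, 1)
a1, b1 = solve_pair(0, 2)
a2, b2 = solve_pair(1, 2)
A = np.array([[0, a0, a1], [-a0, 0, a2], [-a1, -a2, 0]])
B = np.array([[0, b0, b1], [-b0, 0, b2], [-b1, -b2, 0]])

dU = U @ A       # ∂U/∂F_kl, since A = Uᵀ ∂U/∂F_kl
dVt = B @ Vt     # ∂Vᵀ/∂F_kl, since B = (∂Vᵀ/∂F_kl) V

# finite-difference check on the singular-value derivatives
h = 1e-6
lam_p = np.linalg.svd(F + h * E, compute_uv=False)
lam_m = np.linalg.svd(F - h * E, compute_uv=False)
```

By construction, dU Λ Vᵀ + U diag(dlam) Vᵀ + U Λ dVt reassembles exactly to ∂F/∂F_kl, which is a useful sanity check when implementing the energy Hessian.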

P11

A Quick Summary

  • Step 1: By SVD derivatives, we get: \frac{∂\mathbf{U}}{∂F_{kl}}, \frac{∂λ_d}{∂F_{kl}}, \frac{∂\mathbf{V}^T}{∂F_{kl}}.

  • Step 2: By the product rule, we get the derivative of the stress \mathbf{P}=\mathbf{U}\,\mathrm{diag}\left(\frac{∂W}{∂λ_0},\frac{∂W}{∂λ_1},\frac{∂W}{∂λ_2}\right)\mathbf{V}^T with respect to F_{kl}, and from it \frac{∂\mathbf{f}_i}{∂F_{kl}}.

  • Step 3: Finally, we reach our goal in the Hessian matrix:

\frac{∂\mathbf{f}_i}{∂\mathbf{x}_j}=\sum_{k,l}\frac{∂\mathbf{f}_i}{∂F_{kl}}\frac{∂F_{kl}}{∂\mathbf{x}_j}

P12

After-Class Reading

Xu et al. 2015. Nonlinear Material Design Using Principal Stretches. TOG (SIGGRAPH).

Definitely read this paper if you decide to implement it.

P13

Implicit Integration

P14

Implicit Integration

Recall that we need implicit integration to avoid numerical instability (Class 5, page 13):

\begin{cases}\mathbf{v}^{[1]}=\mathbf{v}^{[0]}+∆t\mathbf{M}^{-1}\mathbf{f}^{[1]}\\ \mathbf{x}^{[1]}=\mathbf{x}^{[0]}+∆t\mathbf{v}^{[1]}\end{cases}

or

\begin{cases}\mathbf{x}^{[1]}=\mathbf{x}^{[0]}+∆t\mathbf{v}^{[0]}+∆t^2\mathbf{M}^{-1}\mathbf{f}^{[1]}\\ \mathbf{v}^{[1]}=(\mathbf{x}^{[1]}-\mathbf{x}^{[0]})/∆t\end{cases}

We also said that:

\mathbf{x}^{[1]}=\mathbf{x}^{[0]}+∆t\mathbf{v}^{[0]}+∆t^2\mathbf{M}^{-1}\mathbf{f}(\mathbf{x}^{[1]})

\mathbf{x}^{[1]}=\mathrm{argmin}\ F(\mathbf{x})\quad\mathrm{for}\quad F(\mathbf{x})=\frac{1}{2∆t^2}||\mathbf{x}-\mathbf{x}^{[0]}-∆t\mathbf{v}^{[0]}||_\mathbf{M}^2+E(\mathbf{x})
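The equivalence between the implicit update and the minimization of F(\mathbf{x}) can be checked on a 1D harmonic oscillator, where both sides have closed forms. A minimal sketch; the mass, stiffness, and step values below are made up:

```python
# 1D harmonic oscillator with E(x) = ½ k x², mass m (made-up parameters)
m, k, dt = 1.0, 50.0, 0.02
x0, v0 = 0.3, -1.0

# setting ∇F(x) = (m/∆t²)(x - x0 - ∆t·v0) + k·x = 0 gives the minimizer
x1_opt = m * (x0 + dt * v0) / (m + dt * dt * k)

# implicit Euler: x1 = x0 + ∆t·v0 + ∆t²·(-k·x1)/m, solved for x1
x1_euler = (x0 + dt * v0) / (1.0 + dt * dt * k / m)
```

Both expressions are algebraically identical, which is exactly the claim: the minimizer of F is the implicit Euler state.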

P15

Newton-Raphson Method

The Newton-Raphson method, commonly known as Newton’s method, solves the optimization problem: x[1]=argmin F(x). (F(x) is Lipschitz continuous.)

Given a current \mathbf{x}^{(k)}, we approximate our goal by:

\mathbf{0}=∇F(\mathbf{x})≈∇F(\mathbf{x}^{(k)})+\frac{∂^2F(\mathbf{x}^{(k)})}{∂\mathbf{x}^2}(\mathbf{x}-\mathbf{x}^{(k)})

P16

Newton-Raphson Method

Now we can apply Newton’s method to: \mathbf{x} ^{[1]}= argmin  F (\mathbf{x} ).
Given a current \mathbf{x}^{(k)}, we approximate our goal by:

\mathbf{0}=∇F(\mathbf{x})≈∇F(\mathbf{x}^{(k)})+\frac{∂^2F(\mathbf{x}^{(k)})}{∂\mathbf{x}^2}(\mathbf{x}-\mathbf{x}^{(k)})

Newton’s Method
Initialize \mathbf{x}^{(0)}
For k=0…K
    ∆\mathbf{x}\longleftarrow-\left(\frac{∂^2F(\mathbf{x}^{(k)})}{∂\mathbf{x}^2}\right)^{-1}∇F(\mathbf{x}^{(k)})
    \mathbf{x}^{(k+1)}\longleftarrow\mathbf{x}^{(k)}+∆\mathbf{x}
    If ||∆\mathbf{x}|| is small, then break
\mathbf{x}^{[1]}\longleftarrow\mathbf{x}^{(k+1)}
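The loop above can be sketched in a few lines. A minimal sketch, assuming numpy; the strictly convex test function (quartic plus quadratic) is made up for illustration:

```python
import numpy as np

def newton_minimize(grad, hess, x0, K=50, tol=1e-10):
    """Newton's method: ∆x = -H⁻¹∇F, stop when ‖∆x‖ is small."""
    x = x0.astype(float)
    for _ in range(K):
        dx = -np.linalg.solve(hess(x), grad(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# made-up test problem: F(x) = Σᵢ (xᵢ - cᵢ)⁴ + ½‖x‖²  (strictly convex)
c = np.array([1.0, -2.0, 0.5])
grad = lambda x: 4.0 * (x - c) ** 3 + x            # ∇F
hess = lambda x: np.diag(12.0 * (x - c) ** 2 + 1.0)  # ∂²F/∂x² (diagonal here)

x_star = newton_minimize(grad, hess, np.zeros(3))
```

Since the Hessian of this test function is everywhere positive definite, plain Newton converges without a line search; real elastic energies may need safeguards.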

P17

Simulation by Newton’s Method

Specifically to simulation, we have:

F (\mathbf{x} )=\frac{1}{2∆t^2} ||\mathbf{x} −\mathbf{x} ^{[0]}−∆t\mathbf{v} ^{[0]}||_M^2+E(\mathbf{x} )

∇F(\mathbf{x} ^{(k)})=\frac{1}{∆t^2}\mathbf{M} (\mathbf{x} ^{(k)}−\mathbf{x} ^{[0]}−∆t\mathbf{v} ^{[0]})−\mathbf{f} (\mathbf{x} ^{(k)})

\frac{∂^2F(\mathbf{x} ^{(k)})}{∂\mathbf{x} ^2} =\frac{1}{∆t^2}\mathbf{M} +\mathbf{H} (\mathbf{x} ^{(k)})

Initialize \mathbf{x}^{(0)}, often as \mathbf{x}^{[0]} or \mathbf{x}^{[0]}+∆t\mathbf{v}^{[0]}
For k=0…K
    Solve \left(\frac{1}{∆t^2}\mathbf{M}+\mathbf{H}(\mathbf{x}^{(k)})\right)∆\mathbf{x}=-\frac{1}{∆t^2}\mathbf{M}(\mathbf{x}^{(k)}-\mathbf{x}^{[0]}-∆t\mathbf{v}^{[0]})+\mathbf{f}(\mathbf{x}^{(k)})
    \mathbf{x}^{(k+1)}\longleftarrow\mathbf{x}^{(k)}+∆\mathbf{x}
    If ||∆\mathbf{x}|| is small, then break
\mathbf{x}^{[1]}\longleftarrow\mathbf{x}^{(k+1)}
\mathbf{v}^{[1]}\longleftarrow(\mathbf{x}^{[1]}-\mathbf{x}^{[0]})/∆t
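The simulation loop above can be sketched on a toy system. A minimal sketch, assuming numpy and a made-up 1D two-mass hanging chain (x measured downward, E combining spring energy and gravity potential, so \mathbf{f}=-∇E):

```python
import numpy as np

# made-up parameters: mass, stiffness, rest length, gravity, time step
m, k, r, g, dt = 1.0, 100.0, 1.0, 9.8, 0.01
M = m * np.eye(2)
x_prev = np.array([1.0, 2.0])   # x⁽⁰⁾ state: two masses at rest lengths
v_prev = np.array([0.0, 0.0])

def grad_E(x):
    """∇E for E(x) = ½k(x0-r)² + ½k(x1-x0-r)² - m·g·(x0+x1)."""
    s0, s1 = x[0] - r, x[1] - x[0] - r
    return np.array([k * s0 - k * s1 - m * g, k * s1 - m * g])

H_E = np.array([[2.0 * k, -k], [-k, k]])  # energy Hessian (constant for this quadratic E)

x_hat = x_prev + dt * v_prev              # the inertia target x⁽⁰⁾ + ∆t·v⁽⁰⁾
x = x_hat.copy()                          # initialize x⁽⁰⁾
for _ in range(20):
    gF = M @ (x - x_hat) / dt**2 + grad_E(x)   # ∇F(x⁽ᵏ⁾), since f = -∇E
    HF = M / dt**2 + H_E                       # (1/∆t²)M + H
    dx = -np.linalg.solve(HF, gF)
    x = x + dx
    if np.linalg.norm(dx) < 1e-12:
        break
v = (x - x_prev) / dt                     # v⁽¹⁾ = (x⁽¹⁾ - x⁽⁰⁾)/∆t
```

Because this toy energy is quadratic, Newton converges in a single iteration; a real FEM energy would make H depend on x and require the SVD-based Hessian from the first half of the lecture.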


This article is from CaterpillarStudyGroup. Please credit the source when reposting.

https://caterpillarstudygroup.github.io/GAMES103_mdbook/