• There are three ways to compute the pseudo-inverse of a matrix $A$
    • If $A$ has independent columns, then $A^{+} = (A^TA)^{-1}A^T$ and so $A^{+}A = I$
    • If $A$ has independent rows, then $A^{+} = A^T(AA^T)^{-1}$ and so $AA^{+} = I$
    • A diagonal matrix $\Sigma$ is inverted wherever possible: each nonzero entry $\sigma_k$ becomes $1/\sigma_k$, and the zero entries stay zero in $\Sigma^{+}$ (see the sketch below)
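A minimal NumPy sketch of the three cases. The matrix sizes, random data, and the comparison against `np.linalg.pinv` are illustrative assumptions, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Case 1: independent columns (tall A): A+ = (A^T A)^{-1} A^T, so A+ A = I
A = rng.standard_normal((6, 3))
A_plus = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(A_plus, np.linalg.pinv(A)))   # matches NumPy's pseudo-inverse
print(np.allclose(A_plus @ A, np.eye(3)))       # left inverse: A+ A = I

# Case 2: independent rows (wide B): B+ = B^T (B B^T)^{-1}, so B B+ = I
B = rng.standard_normal((3, 6))
B_plus = B.T @ np.linalg.inv(B @ B.T)
print(np.allclose(B @ B_plus, np.eye(3)))       # right inverse: B B+ = I

# Case 3: diagonal Sigma: invert the nonzero entries, keep the zeros
sigma = np.array([3.0, 2.0, 0.0])
sigma_plus = np.divide(1.0, sigma, out=np.zeros_like(sigma), where=sigma != 0)
print(sigma_plus)                               # [0.333..., 0.5, 0.]
```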
  • Typically one comes across the solution to $Ax=b$ presented as $x = A^{+}b$. However, using the pseudo-inverse is not the right way to solve this computationally. Why?
    • Every matrix has a pseudo-inverse. $\Sigma^{+}$ contains $1/\sigma_k$ for every nonzero element of $\Sigma$, but it is meant to contain $0$ for every zero in $\Sigma$. Knowing when a computed number is exactly $0$ is an extremely rigid requirement (see the sketch below)
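A sketch of the practical point, assuming randomly generated data: a least-squares solver such as `np.linalg.lstsq` solves $Ax=b$ directly, while forming $A^{+}$ explicitly forces a decision (via a tolerance) about which $\sigma_k$ count as zero:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))   # full column rank with probability 1
b = rng.standard_normal(100)

# Preferred in practice: a least-squares solver (QR/SVD based internally)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# Forming A+ explicitly also works, but pinv must decide which sigma_k are "zero"
# using a relative tolerance, because "exactly zero" is too rigid numerically
x_pinv = np.linalg.pinv(A) @ b

print(np.allclose(x_lstsq, x_pinv))  # True here, since A has full column rank
```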
  • Generalized SVD: two matrices are factorized simultaneously (a small numerical sketch follows this list)
    • $A$ and $B$ can be factored into $A = U_A \Sigma_A Z$ and $B = U_B \Sigma_B Z$
    • $U_A$ and $U_B$ are orthogonal matrices
    • $\Sigma_A$ and $\Sigma_B$ are positive diagonal matrices
    • $Z$ is an invertible matrix
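A small sketch of one way such a factorization can be built, under the extra assumption (not stated in the notes) that $B$ is square and invertible: take the ordinary SVD of the quotient $AB^{-1}$ and regroup the factors.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 4))
B = rng.standard_normal((4, 4))      # assumption: B is square and invertible

# SVD of the quotient A B^{-1}, then regroup the factors
U, s, Vt = np.linalg.svd(A @ np.linalg.inv(B), full_matrices=False)

U_A, Sigma_A = U, np.diag(s)         # U_A has orthonormal columns, Sigma_A diagonal
U_B, Sigma_B = Vt.T, np.eye(4)       # U_B orthogonal, Sigma_B diagonal (here identity)
Z = Vt @ B                           # shared invertible factor Z

print(np.allclose(A, U_A @ Sigma_A @ Z))   # A = U_A Sigma_A Z
print(np.allclose(B, U_B @ Sigma_B @ Z))   # B = U_B Sigma_B Z
```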