Purpose
To understand the backward stability of Householder triangularization (QR factorization) through a numerical experiment.

> HHT <- function(A) {
+     # Householder triangularization: returns the input matrix (A), its
+     # triangularized form (A.t, i.e. R), and the orthogonal factor (Q)
+     A.bkup <- A
+     m <- dim(A)[1]
+     n <- dim(A)[2]
+     norm2 <- function(x) {
+         sqrt(sum(x^2))
+     }
+     # stack the n successive m-by-m reflectors in Q, one block per step
+     Q <- matrix(data = 0, nrow = (n * m), ncol = m)
+     for (k in 1:n) {
+         Qt <- diag(m)
+         x <- A[k:m, k]
+         e1 <- c(1, rep(0, (length(x) - 1)))
+         # sign chosen to avoid cancellation (note that sign(0) is 0 in
+         # R, so a robust version would map a zero pivot to +1)
+         Vk <- sign(x[1]) * norm2(x) * e1 + x
+         Vk <- Vk/norm2(Vk)
+         # embed the (m-k+1)-square reflector in the trailing block of Qt
+         Qt[k:m, k:m] <- diag(length(x)) - 2 * Vk %*% t(Vk)
+         # reflect the trailing submatrix of A
+         A[k:m, k:n] <- A[k:m, k:n] - 2 * Vk %*% t(Vk) %*% A[k:m,
+             k:n]
+         start.ind <- (k - 1) * m + 1
+         end.ind <- k * m
+         Q[start.ind:end.ind, ] <- Qt
+     }
+     # accumulate Q = Q1 Q2 ... Qn (each reflector is its own inverse)
+     Q.f <- diag(m)
+     for (i in 1:n) {
+         start.ind <- (i - 1) * m + 1
+         end.ind <- i * m
+         Q.f <- Q.f %*% Q[start.ind:end.ind, ]
+     }
+     list(A = A.bkup, A.t = A, Q = Q.f)
+ }
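As a quick sanity check (a sketch with illustrative names, not part of the session below), one can verify on a small random matrix that the computed factors multiply back to the input:

> A.small <- matrix(rnorm(9), nrow = 3, ncol = 3)
> out <- HHT(A.small)
> recon.err <- max(abs(out$Q %*% out$A.t - A.small))  # expect roughly machine precision, ~ 1e-15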
> library(Matrix)
> N <- 2500
> m <- 50
> n <- 50
> set.seed(12345)
> R1 <- matrix(rnorm(N), nrow = m, ncol = n)
> for (j in 1:(n - 1)) R1[(j + 1):m, j] <- 0
> B <- matrix(rnorm(N), nrow = m, ncol = n)
> Q1 <- qr.Q(qr(B))
> A <- Q1 %*% R1
> hh <- HHT(A)  # factor once, then extract both pieces
> Q2 <- hh$Q
> R2 <- hh$A.t
> # here HHT's sign convention differs from that of Q1 and R1 only in the
> # last column, so flip column n of Q2 and row n of R2 (a single entry,
> # since R2 is upper triangular and m = n); the product is unchanged
> Q2[, n] <- Q2[, n] * -1
> R2[m, n] <- R2[m, n] * -1
> normv <- function(x) max(svd(x)$d)  # spectral norm: largest singular value

Norm Comparison for Q

> print(normv(Q2 - Q1))
[1] 0.0009808559

Norm Comparison for R

> print(normv(R2 - R1)/normv(R1))
[1] 0.0001509662

These errors are huge! Our calculations have been done with sixteen digits of accuracy, yet the final results are accurate to only two or three digits. The individual rounding errors have been amplified by factors on the order of 10^13.
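
To see the amplification explicitly, divide the forward error in Q by machine epsilon (a one-line check, not in the original session):

> amp <- normv(Q2 - Q1)/.Machine$double.eps  # about 4e+12 here, i.e. on the order of 10^13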

Norm Comparison for A

> print(normv(A - Q2 %*% R2))
[1] 1.358075e-14

The product Q2R2 is accurate to a full fifteen digits! The errors in Q2 and R2 must be “diabolically correlated,” as Wilkinson used to say.
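
Backward stability is easy to verify directly: Q2 is orthogonal to machine precision, and the below-diagonal part of R2 is negligible. A quick check (illustrative names, not from the original session):

> orth.err <- normv(t(Q2) %*% Q2 - diag(m))  # expect near machine precision
> tri.err <- normv(R2 * lower.tri(R2))  # strictly lower triangle of R2, also expect near machine precision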

Let's perturb Q1 and R1 by random amounts comparable to the errors just observed:

> temp1 <- matrix(rnorm(N), nrow = m, ncol = n)
> temp2 <- matrix(rnorm(N), nrow = m, ncol = n)
> Q3 <- Q1 + (10^-4) * temp1
> R3 <- R1 + (10^-4) * temp2

Norm Comparison for A

> print(normv(A - Q3 %*% R3))
[1] 0.01007423

This time, the error in the product is huge. Q2 is no better than Q3, and R2 is no better than R3, but Q2R2 is twelve orders of magnitude better than Q3R3.
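
The size of this error is no mystery. Writing dQ = 10^-4 * temp1 and dR = 10^-4 * temp2, we have exactly A - Q3 %*% R3 = -(Q1 %*% dR + dQ %*% R1 + dQ %*% dR), so a perturbation of size 10^-4 in the factors passes essentially undamped into the product. One can confirm this (not part of the original session):

> first.order <- 1e-04 * normv(Q1 %*% temp2 + temp1 %*% R1)  # agrees with the 0.01007 above up to a negligible second-order term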

The errors in Q2 and R2 are forward errors. In general, a large forward error can be the result of an ill-conditioned problem or an unstable algorithm; in our experiment, it is due to the former.

As a rule, the sequences of column spaces of a random triangular matrix are exceedingly ill-conditioned as functions of the entries of the matrix.
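
A sketch of this sensitivity (illustrative, not from the original session): perturb A in its last few significant digits and compare the Q factors that R's built-in qr() computes before and after. Assuming qr() makes the same sign choices for both, which it typically does for so small a perturbation, the change in Q dwarfs the perturbation itself:

> Q.a <- qr.Q(qr(A))
> Q.b <- qr.Q(qr(A + 1e-14 * matrix(rnorm(N), nrow = m, ncol = n)))
> sens <- normv(Q.b - Q.a)  # expect many orders of magnitude larger than 1e-14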

TAKEAWAY
We have seen that Householder triangularization is backward stable but not always accurate in the forward sense. (The same is true of most of the matrix factorizations of numerical linear algebra.) Now, QR factorization is generally not an end in itself, but a means to other ends such as solution of a system of equations, a least squares problem, or an eigenvalue problem. Is its backward stability enough to make it a satisfactory piece of a larger algorithm? That is, is accuracy of the product QR enough for applications, or do we need accuracy of Q and R individually? The happy answer is that accuracy of QR is indeed enough for most purposes. We can show this by surprisingly simple arguments.
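
For instance, even though Q2 and R2 are individually accurate to only a few digits, using them to solve the square system Ax = b produces a residual on the order of machine precision, which is exactly what backward stability promises and what most applications need. A sketch with a hypothetical right-hand side b (backsolve() reads only the upper triangle of R2):

> b <- rnorm(m)
> x.hat <- backsolve(R2, crossprod(Q2, b))  # solve R2 %*% x = t(Q2) %*% b
> rel.res <- normv(A %*% x.hat - b)/(normv(A) * normv(x.hat))  # expect ~ machine epsilon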