AM117 Sample Solutions, HW#5

Results:

Discussion: The error bounds hold in all cases, as expected, though with varying degrees of tightness. (In one case the bound is even attained.)

Code:

Results:

Discussion: As expected, the bounds hold. However, use this exercise as a warning: in b), a small change in $b$ leads to a ``relatively small'' bound, but it happens to cause a large change in $x$, bringing that change close to the predicted bound. In c), on the other hand, the change in $b$ is large, but the change in $x$ is small, and a good ways away from the bound. Observe, then, that the sharpness of these estimates depends strongly on how ``sensitive'' the solution process is in the direction of the perturbation $\delta b$. The condition number only tells you about the worst-case direction, which your $\delta b$ need not lie in, as for example in c), causing an unnecessarily large bound. Note, however, that a change of the same relative size as in c) would almost have achieved the bound in b). (A small numerical illustration of this point is sketched at the end of this writeup.)

Code:

Results:

Discussion: All bounds hold.

Code:

a) The best way to try and show that we've designed an algorithm that correctly computes the Crout LU decomposition of a matrix (short of actually proving it...) is to code it up and see if it works. For clarity, we're looking for a decomposition of the type

\begin{equation*}
\begin{pmatrix}
l_{11} & & & & \\
l_{21} & l_{22} & & & \\
l_{31} & l_{32} & l_{33} & & \\
 & l_{42} & l_{43} & l_{44} & \\
 & & \ddots & \ddots & \ddots
\end{pmatrix}
\begin{pmatrix}
1 & u_{12} & u_{13} & & \\
 & 1 & u_{23} & u_{24} & \\
 & & 1 & u_{34} & \ddots \\
 & & & 1 & \ddots \\
 & & & & \ddots
\end{pmatrix}
=
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & & \\
a_{21} & a_{22} & a_{23} & a_{24} & \\
a_{31} & a_{32} & a_{33} & a_{34} & \ddots \\
 & a_{42} & a_{43} & a_{44} & \ddots \\
 & & \ddots & \ddots & \ddots
\end{pmatrix},
\end{equation*}

that is, $L$ is lower triangular with two subdiagonals, $U$ is unit upper triangular with two superdiagonals, and the matrix $A=LU$ is pentadiagonal. So, here goes some code:

And indeed, it works! See here:

It is a trivial exercise for the reader :) to extract the algorithm in mathematical notation from the above code. (For reference, a sketch of such a routine also appears at the end of this writeup.)

b) To count the operations, realize that the Crout decomposition effectively goes through the matrix column by column, filling up each column from top to bottom. Thus we may associate an operation count with each non-zero entry of the matrix. For a generic column $j$ away from the boundary, the entries, from top to bottom, cost

\begin{equation*}
\begin{array}{c|ccccc}
\text{entry} & u_{j-2,j} & u_{j-1,j} & l_{j,j} & l_{j+1,j} & l_{j+2,j}\\
\hline
\text{operations} & 1 & 3 & 4 & 2 & 0
\end{array}
\end{equation*}

Note that the computation of $l_{ij}$ is associated with the entries on the diagonal and below, while the entries above the diagonal are associated with $u_{ij}$. From this accounting, each of the $n-4$ columns in the middle requires 10 operations, while the first two and the last two columns, where some of these entries are absent, together add 25 operations, yielding

\begin{equation*}
10(n-4)+25=10n-15
\end{equation*}

operations. Observe that while this result is still linear in $n$, the constants have grown more than linearly compared with the corresponding result for tridiagonal matrices.

c) Like above, we will simply jot down the number of operations for each element of the factored matrices required for back- and forward substitution. For back substitution with $U$ (the unit diagonal costs nothing, and each superdiagonal element costs one multiplication and one subtraction), we get

\begin{equation*}
\begin{pmatrix}
0 & 2 & 2 & & \\
 & 0 & 2 & 2 & \\
 & & \ddots & \ddots & \ddots \\
 & & & 0 & 2 \\
 & & & & 0
\end{pmatrix}
\end{equation*}

Thus we obtain

\begin{equation*}
4(n-2)+2=4n-6
\end{equation*}

for the back substitution step. For forward substitution with $L$ (one division for each diagonal element, one multiplication and one subtraction for each subdiagonal element), we write

\begin{equation*}
\begin{pmatrix}
1 & & & & \\
2 & 1 & & & \\
2 & 2 & 1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & 2 & 2 & 1
\end{pmatrix}
\end{equation*}

and immediately see that we need

\begin{equation*}
5(n-2)+4=5n-6
\end{equation*}

steps. All in all, the substitution process needs

\begin{equation*}
9n-12
\end{equation*}

steps.
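For reference, here is a small numerical illustration, in Python with NumPy, of the point made in the perturbation-bound discussion above: the bound $\|\delta x\|/\|x\| \le \mathrm{cond}(A)\,\|\delta b\|/\|b\|$ is only approached when $\delta b$ points in the worst-case direction of $A$. This is a sketch, not part of the original solutions; the matrix, the right-hand side, and the perturbation directions below are arbitrary choices made here for the illustration.

Code:

import numpy as np

# Construct a deliberately ill-conditioned matrix.  All choices below are
# arbitrary and only serve the illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A[0] *= 1e-4                               # squash one row: cond(A) becomes large

U, s, Vt = np.linalg.svd(A)                # A = U diag(s) V^T, s sorted descending
cond = s[0] / s[-1]

b = A @ Vt[0]                              # b chosen along u_max, so ||b|| = s_max * ||x||
x = np.linalg.solve(A, b)                  # unperturbed solution (the first right singular vector)

eps = 1e-8
for label, db in [("db along u_min (worst case)", eps * U[:, -1]),
                  ("db along u_max (benign)    ", eps * U[:, 0])]:
    dx = np.linalg.solve(A, b + db) - x
    actual = np.linalg.norm(dx) / np.linalg.norm(x)
    bound = cond * np.linalg.norm(db) / np.linalg.norm(b)
    print(f"{label}: actual relative change {actual:.2e}, predicted bound {bound:.2e}")

With this particular construction, the perturbation along the last left singular vector essentially attains the bound (up to rounding), while the one along the first falls short of it by a factor of roughly $\mathrm{cond}(A)$, mirroring the contrast between parts b) and c) of the exercise.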
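Also for reference, here is a minimal sketch, in Python with NumPy, of a Crout routine for the pentadiagonal structure of part a). It is not the listing referred to above: it stores $L$ and $U$ as dense arrays purely for readability (a banded implementation would keep only the five diagonals), and the function name crout_penta is an illustrative choice.

Code:

import numpy as np

def crout_penta(A):
    """Crout factorization A = L U for a pentadiagonal A: L is lower triangular
    with two subdiagonals, U is unit upper triangular with two superdiagonals."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):                     # walk through the matrix column by column
        # entries above the diagonal of column j (these belong to U)
        if j >= 2:
            U[j-2, j] = A[j-2, j] / L[j-2, j-2]
        if j >= 1:
            U[j-1, j] = (A[j-1, j]
                         - (L[j-1, j-2] * U[j-2, j] if j >= 2 else 0.0)) / L[j-1, j-1]
        # diagonal and subdiagonal entries of column j (these belong to L)
        L[j, j] = (A[j, j]
                   - (L[j, j-2] * U[j-2, j] if j >= 2 else 0.0)
                   - (L[j, j-1] * U[j-1, j] if j >= 1 else 0.0))
        if j + 1 < n:
            L[j+1, j] = A[j+1, j] - (L[j+1, j-1] * U[j-1, j] if j >= 1 else 0.0)
        if j + 2 < n:
            L[j+2, j] = A[j+2, j]
    return L, U

# quick sanity check on a random pentadiagonal matrix with a safely large diagonal
n = 8
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
A[np.abs(np.subtract.outer(np.arange(n), np.arange(n))) > 2] = 0.0
A += 10.0 * np.eye(n)                      # keep the pivots l_{jj} away from zero
L, U = crout_penta(A)
print("max |A - L U| =", np.abs(A - L @ U).max())

Using dense storage keeps the indexing identical to the formulas; switching to five 1-D arrays for the bands changes only the bookkeeping, not the operation count discussed in b).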
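Finally, a sketch of the banded forward and back substitution from part c), again in Python, with a simple operation counter that reproduces the $5n-6$, $4n-6$ and $9n-12$ totals derived above. The test factors $L$ and $U$ are generated directly so that the sketch stands on its own, and the helper name solve_penta_lu is, again, just an illustrative choice.

Code:

import numpy as np

def solve_penta_lu(L, U, b):
    """Solve L U x = b, where L has two subdiagonals and U is unit upper
    triangular with two superdiagonals.  Returns (x, number of operations)."""
    n = len(b)
    ops = 0

    # forward substitution  L y = b
    y = np.zeros(n)
    for i in range(n):
        s = b[i]
        for k in range(max(0, i - 2), i):      # at most two subdiagonal terms
            s -= L[i, k] * y[k]
            ops += 2                           # one multiplication, one subtraction
        y[i] = s / L[i, i]
        ops += 1                               # one division
    # total so far: 1 + 3 + 5(n-2) = 5n - 6

    # back substitution  U x = y  (unit diagonal, hence no divisions)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        s = y[i]
        for k in range(i + 1, min(n, i + 3)):  # at most two superdiagonal terms
            s -= U[i, k] * x[k]
            ops += 2
        x[i] = s
    # this step adds 4(n-2) + 2 = 4n - 6 operations
    return x, ops

# build banded factors directly so that this sketch is self-contained
n = 8
rng = np.random.default_rng(2)
L = np.zeros((n, n))
U = np.eye(n)
for i in range(n):
    L[i, max(0, i - 2):i + 1] = rng.standard_normal(i + 1 - max(0, i - 2))
    L[i, i] += 5.0                             # keep the diagonal entries away from zero
    U[i, i + 1:min(n, i + 3)] = rng.standard_normal(min(n, i + 3) - i - 1)
b = rng.standard_normal(n)

x, ops = solve_penta_lu(L, U, b)
print("residual  :", np.abs(L @ (U @ x) - b).max())
print("operations:", ops, "(9n - 12 =", 9 * n - 12, ")")

For $n=8$ the counter reports 60 operations, i.e. $9\cdot 8 - 12$, in line with the count above.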