AM117 Sample Solutions HW #4

General remarks:

- When you're counting operations of some algorithm, you need to show code (be it Matlab or pseudocode) that matches your counts. Otherwise there's no telling where your results came from.

- When you find a result using facts from the book, always mention what you are using. For example, it will in general not be enough to say "This matrix is s.p.d." even if that is so. You need to give a reason why you believe so.

- Be very aware of what a necessary and what a sufficient condition is. You might want to review what you learned in calculus about conditions for local extrema of functions to help you along with that. In particular, the theorem on p. 213 in the book gives conditions that have to be true for every s.p.d. matrix. That means, if they are not satisfied, then it is safe to conclude that the matrix is not s.p.d. But even if all the conditions are true, the matrix may still not be s.p.d.: the conditions are necessary, not sufficient. (That is, unless you can point to a proof of that fact.)

- When counting operations, try not to initialize variables with zero and then add in a loop. If you do, the first operation is always "0 + x", which could just as well be handled by an assignment without any arithmetic. If you do implement and count "0 + x" as such, you will generally miss the operation count that the book gives.

- For the Cholesky back-/forward-substitution step, realize that a transpose costs $n^2$ operations, even if they're not arithmetic ones. The goal of the problem was to write an algorithm that avoids this transpose, which the solution below does.

Problem 1.

a)
\[
\|(3,-5,\sqrt{2})\|_2 = \sqrt{3^2 + 5^2 + \sqrt{2}^2} = \sqrt{9 + 25 + 2} = \sqrt{36} = 6,
\]
\[
\|(3,-5,\sqrt{2})\|_\infty = \max\{|3|,\ |-5|,\ |\sqrt{2}|\} = 5.
\]

b) For the given vector $x$,
\[
\|x\|_2 = \sqrt{30} \approx 5.477, \qquad \|x\|_\infty = 4.
\]

c) For the given vector $x$,
\[
\|x\|_2 = \sqrt{81} = 9, \qquad \|x\|_\infty = 8.
\]

d)
\[
\|(\ldots,-6,4,2)\|_2 = \sqrt{68} \approx 8.246, \qquad \|(\ldots,-6,4,2)\|_\infty = 6.
\]

e)
\[
\|(\ldots,-1)\|_2 = \sqrt{\pi^2 + e^2 + 1} \approx 4.273, \qquad \|(\ldots,-1)\|_\infty = \pi.
\]

Problem 2.

a) Let
\[
A = \begin{pmatrix} 5 & 4 \\ 1 & 7 \end{pmatrix}.
\]
Let's do the straightforward thing first: $\|A\|_\infty = \max\{5+4,\ 1+7\} = 9$. To calculate $\|A\|_2$, we need a couple more steps. First, we need $A^T A$:
\[
A^T A = \begin{pmatrix} 26 & 27 \\ 27 & 65 \end{pmatrix}.
\]
Next, we need the eigenvalues of $A^T A$. To that end, we write down the characteristic polynomial of $A^T A$:
\[
\det(A^T A - \lambda I) = \begin{vmatrix} 26-\lambda & 27 \\ 27 & 65-\lambda \end{vmatrix} = (26-\lambda)(65-\lambda) - 27^2 = 0.
\]
The solutions of that equation are
\[
\lambda_\pm(A^T A) = \frac{91 \pm 3\sqrt{493}}{2}.
\]
Thus,
\[
\lambda_{\max}(A^T A) = \max\{|\lambda_+|,\ |\lambda_-|\} = \frac{91 + 3\sqrt{493}}{2} \approx 78.805,
\]
and we obtain $\|A\|_2 = \sqrt{\lambda_{\max}(A^T A)} \approx 8.877$.

b) For the given matrix $A$, forming $A^T A$ and solving the characteristic equation in the same way yields
\[
\lambda_\pm(A^T A) = 15 \pm 5\sqrt{5}.
\]
Thus $\|A\|_2 = \sqrt{15 + 5\sqrt{5}} \approx 5.1167$. Slightly more straightforward, $\|A\|_\infty = 4 + 2 = 6$.

c) Let $A$ be the given $3 \times 3$ matrix and form $A^T A$. Here, obtaining the roots of
\[
\det(A^T A - \lambda I) = -\lambda^3 + 51\lambda^2 - 582\lambda + 1296 = 0
\]
analytically really does not help much; I also don't really know how to do it by hand. Sure, I can let a computer algebra system loose on it, but that only results in huge terms that are not necessarily helpful. So we'll compute these eigenvalues numerically, for example with

```matlab
lambda = eig(A' * A)      % eigenvalues of A'*A
nrm2 = sqrt(max(lambda))  % the matrix 2-norm of A
```

and obtain, from the characteristic polynomial above,
\[
\lambda(A^T A) \approx \{2.94,\ 12.3,\ 35.7\}, \qquad \|A\|_2 = \sqrt{\lambda_{\max}(A^T A)} \approx 5.98.
\]
Further, it is not hard to see that $\|A\|_\infty = 7$.

d) Again, we will simply compute the eigenvalues and the norm numerically, using the same two lines as in part c). Straightforwardly, $\|A\|_\infty = 11$.

Problem 3.

A few helpful preliminaries before we start: There are $n^2$ entries in an $n \times n$ square matrix; easy, right? But how many entries are in a triangular matrix, including the diagonal? The answer is $n^2/2 + n/2$, and this is simply one way of writing Gauss's sum formula
\[
\sum_{k=1}^{n} k = \frac{n(n+1)}{2},
\]
which is easily shown by induction. (If you're not absolutely sure how this would work, please go ahead and do it. It's really quick.) If you don't count the diagonal in a triangular matrix, you get $n$ fewer entries, i.e. $n^2/2 - n/2$.
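These entry counts are easy to check numerically. Here is a quick Matlab sanity check; it is purely illustrative, and nothing in it is required for the solutions below:

```matlab
% Compare direct entry counts of triangular matrices with the
% closed-form expressions obtained from Gauss's sum formula.
for n = [1 2 5 10]
  with_diag    = nnz(tril(ones(n)));      % lower triangle incl. diagonal
  without_diag = nnz(tril(ones(n), -1));  % strictly below the diagonal
  fprintf('n = %2d: %3d vs. %3d, %3d vs. %3d\n', n, ...
          with_diag, n^2/2 + n/2, without_diag, n^2/2 - n/2);
end
```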
a) The quickest way to the goal is realizing that the only difference between Gaussian elimination for LU and Gaussian elimination for solving linear systems is that Gauss for LU does not have to update the right-hand side. That saves one subtraction and one multiplication per inner loop, of which there is one for each entry below the diagonal, i.e. $n^2/2 - n/2$ of them, as we learned above. All in all, we save $n^2 - n$ operations. P. 155 in the book shows that Gauss for solving takes
\[
\frac{2}{3}n^3 + \frac{1}{2}n^2 - \frac{7}{6}n
\]
operations, so Gauss for LU takes
\[
\frac{2}{3}n^3 + \frac{1}{2}n^2 - \frac{7}{6}n - (n^2 - n) = \frac{2}{3}n^3 - \frac{1}{2}n^2 - \frac{1}{6}n,
\]
which is the value we're looking to prove. There is of course also the long way around, involving operation counting and induction proofs, if you like long-winded, error-prone methods.

b) It is shown in the book that backsubstitution with a general triangular matrix (such as the result of Gaussian elimination) takes $n^2$ operations (cf. p. 156). Doing this twice would result in $2n^2$ operations, but that does not take into account that either $L$ or $U$ can be chosen so that it has ones on the diagonal. Inspection of the pseudocode on p. 155f shows that this saves us $n$ divisions (in the first and last line of the code, respectively). Altogether we get an operation count of $2n^2 - n$, as claimed.

c) What we are asked to do is essentially provide an operation count for matrix multiplication. Here's some quick and dirty code, in Matlab:

```matlab
% Matrix multiplication C = A*B for n x n matrices, written with
% explicit loops so that the operations can be counted.
function C = matmul(A, B)
  n = size(A, 1);
  C = zeros(n);
  for i = 1:n
    for j = 1:n
      C(i,j) = A(i,1) * B(1,j);              % 1 multiplication
      for k = 2:n
        C(i,j) = C(i,j) + A(i,k) * B(k,j);   % 1 addition, 1 multiplication
      end
    end
  end
end
```

The innermost work for each entry $C_{ij}$ consists of $n-1$ additions and $n$ multiplications, so $2n-1$ operations total. It is performed $n^2$ times, once for each entry of $C$, so we have
\[
n^2 (2n - 1) = 2n^3 - n^2
\]
operations, as claimed.

Problem 4.

The following table summarizes the results:

    a)  s.p.d.      (diag. dom.)
    b)  not s.p.d.  (not sym.)
    c)  not s.p.d.  ($a_{ij}^2 \ge a_{ii} a_{jj}$)
    d)  s.p.d.      (see below)
    e)  not s.p.d.  (not sym.)
    f)  not s.p.d.  ($a_{ii} \le 0$)

The theorem on p. 213 has many useful criteria that allow us to quickly conclude that a matrix is not positive definite; the remarks in parentheses refer to the condition that is violated. Similarly, the corollary on p. 214 is a quick positive result, which we can use in part a). This leaves us with only one matrix to check explicitly: the one in d). For this matrix, we'll solve
\[
\begin{pmatrix} \ell_{11} & 0 & 0 \\ \ell_{21} & \ell_{22} & 0 \\ \ell_{31} & \ell_{32} & \ell_{33} \end{pmatrix}
\begin{pmatrix} \ell_{11} & \ell_{21} & \ell_{31} \\ 0 & \ell_{22} & \ell_{32} \\ 0 & 0 & \ell_{33} \end{pmatrix}
= A
\]
by direct factorization, where $A$ is the matrix given in d). Working through the entries column by column, every quantity under a square root comes out positive, and we obtain a real lower-triangular factor $L$. Since the Cholesky decomposition works if and only if a matrix is s.p.d., we may conclude that this is the case for the matrix in d).

Problem 5.

Consider the following (already annotated) code for the Cholesky decomposition, here written in Matlab with the operation counts in the comments:

```matlab
% Cholesky decomposition A = L*L' of an s.p.d. matrix A.
function L = cholesky(A)
  n = size(A, 1);
  L = zeros(n);
  for k = 1:n
    s = A(k,k);
    for j = 1:k-1
      s = s - L(k,j)^2;          % 1 multiplication, 1 subtraction
    end                          % -> 2(k-1) operations per diagonal entry
    L(k,k) = sqrt(s);            % square roots are counted separately
    for i = k+1:n
      s = A(i,k);
      for j = 1:k-1
        s = s - L(i,j) * L(k,j); % 1 multiplication, 1 subtraction
      end
      L(i,k) = s / L(k,k);       % 1 division
    end                          % -> 2(k-1)+1 operations per entry below
  end
end
```

According to it, we have
\[
\sum_{k=1}^{n} \Bigl( 2(k-1) + \sum_{i=k+1}^{n} \bigl[ 2(k-1) + 1 \bigr] \Bigr)
\]
operations. (As a help when doing problems like these: you might want to figure out how summing works in your favorite computer algebra system. That way, after you obtain the non-closed-form term above, you can have the computer verify that it indeed does sum to the right expression before you waste time proving something that's wrong. Note: Turning in computer output does not count as a proof. This is just meant to help you not waste time. Note 2: For example, Maxima can evaluate the sum; see the sketch at the end of this derivation, which tells us that we're on the right track.) We can prove this now by massaging the formula into the right shape:
\[
\begin{aligned}
\sum_{k=1}^{n} \Bigl( 2(k-1) + \sum_{i=k+1}^{n} \bigl[ 2(k-1)+1 \bigr] \Bigr)
&= \sum_{k=1}^{n} \Bigl( 2(k-1) + \bigl( n-(k+1)+1 \bigr) \bigl[ 2(k-1)+1 \bigr] \Bigr) \\
&= \sum_{k=1}^{n} \Bigl( 2(k-1) + (n-k) \bigl[ 2(k-1)+1 \bigr] \Bigr) \\
&= \sum_{k=1}^{n} \bigl( n(2k-1) - 2k^2 + 3k - 2 \bigr) \\
&= 2n \sum_{k=1}^{n} k - n^2 - 2 \sum_{k=1}^{n} k^2 + 3 \sum_{k=1}^{n} k - 2n \\
&= n^2(n+1) - n^2 - \frac{n(n+1)(2n+1)}{3} + \frac{3n(n+1)}{2} - 2n \\
&= \frac{n^3}{3} + \frac{n^2}{2} - \frac{5n}{6},
\end{aligned}
\]
where we've used Gauss's sum formula
\[
\sum_{k=1}^{n} k = \frac{n(n+1)}{2}
\]
and the fact that
\[
\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6},
\]
which you can either look up or easily prove by induction.
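The Maxima check mentioned in the note above might look like this; a minimal sketch (`simpsum` and `ratsimp` are standard Maxima commands, and the variable name `S` is an arbitrary choice):

```maxima
/* Sum the per-entry operation counts symbolically and compare
   against the claimed closed form n^3/3 + n^2/2 - 5n/6. */
simpsum: true$
S: sum(2*(k-1) + (n-k)*(2*(k-1) + 1), k, 1, n)$
ratsimp(S);                            /* => (2*n^3 + 3*n^2 - 5*n)/6 */
ratsimp(S - (n^3/3 + n^2/2 - 5*n/6));  /* => 0, i.e. the count is right */
```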
Further, there is one square root per diagonal entry of $L$, so that we end up with $n$ square roots.

a) Consider that we're trying to solve $L L^T \vec{x} = \vec{b}$, or rather, $L (L^T \vec{x}) = \vec{b}$, and note that, just like for LU, the solving can be broken up into two separate subtasks, namely
\[
L \vec{y} = \vec{b}, \qquad L^T \vec{x} = \vec{y}.
\]
We assume that $L \in \mathbb{R}^{n \times n}$ is a given lower-triangular matrix that satisfies $L L^T = A$ for some matrix $A$ for which we're solving $A \vec{x} = \vec{b}$. The first solve can be performed as
\[
y_i = \Bigl( b_i - \sum_{j=1}^{i-1} L_{ij} y_j \Bigr) \Big/ L_{ii} \qquad (i = 1, \ldots, n),
\]
and the second solve works on $L^T$ and yields
\[
x_i = \Bigl( y_i - \sum_{j=i+1}^{n} (L^T)_{ij} x_j \Bigr) \Big/ (L^T)_{ii}
    = \Bigl( y_i - \sum_{j=i+1}^{n} L_{ji} x_j \Bigr) \Big/ L_{ii} \qquad (i = n, \ldots, 1).
\]
Note that in the second solve, the vector $\vec{x}$ needs to be filled in backwards (i.e. from $n$ to 1, in the same order as for a regular Gauss backsubstitution). Also note that by indexing $L_{ji}$ directly, we never form the transpose $L^T$ explicitly, which saves the $n^2$ operations mentioned in the remarks at the beginning.

b) Following Problem 3b), it is not hard to see that the operation count for two backsubstitutions including the diagonal is $2n^2$ (also cf. p. 156).
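Code: a minimal Matlab sketch of the algorithm from part a); the function name `cholsolve` is an arbitrary choice, and `L` is assumed to be the lower-triangular Cholesky factor of $A$:

```matlab
% Solve A*x = b given the Cholesky factor L (A = L*L'), without ever
% forming the transpose L' explicitly.
function x = cholsolve(L, b)
  n = length(b);
  y = zeros(n, 1);
  for i = 1:n                  % forward substitution: L*y = b
    s = b(i);
    for j = 1:i-1
      s = s - L(i,j) * y(j);
    end
    y(i) = s / L(i,i);
  end
  x = zeros(n, 1);
  for i = n:-1:1               % back substitution: L'*x = y, accessing
    s = y(i);                  % L(j,i) in place of the transpose's (i,j)
    for j = i+1:n
      s = s - L(j,i) * x(j);
    end
    x(i) = s / L(i,i);
  end
end
```

As a quick check, one can compare against Matlab's backslash on a random s.p.d. test system:

```matlab
n = 5;
B = rand(n); A = B*B' + n*eye(n);  % s.p.d. by construction
b = rand(n, 1);
L = chol(A)';                      % chol returns the upper factor R = L'
norm(cholsolve(L, b) - A \ b)      % should be near machine precision
```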