1. Problem 1.71. Our proof of the Cauchy-Schwarz inequality, Theorem 1.13, used that when U is a unit vector, $$0 \leq ||V - (U \cdot V)U||^2 = ||V||^2 - (U \cdot V)^2.$$ Therefore if U is a unit vector and equality holds, then V = (U · V)U. Show that equality occurs in the Cauchy-Schwarz inequality for two arbitrary vectors V and W only if one of the vectors is a multiple (perhaps zero) of the other vector.

Answer: In the first case, when W = 0, we have W = 0 · V, so W is a multiple of V. In the second case, when W is nonzero, consider the unit vector $U = \frac{W}{||W||}$. By the result quoted in the question, equality forces V = (U · V)U. Therefore $$V = (U \cdot V)U = \left(\frac{W}{||W||} \cdot V\right)\frac{W}{||W||} = \left(\frac{W \cdot V}{||W||^2}\right)W.$$ Since $\frac{W \cdot V}{||W||^2}$ is a scalar, V is a multiple of W.

2. Problem 2.19. Suppose C is an n by n matrix with orthonormal columns. Use Theorem 2.2 to show that $$||CX|| \leq \sqrt{n}\,||X||.$$ Use the Pythagorean theorem and the result of Problem 2.17 to show that in fact $$||CX|| = ||X||$$ for such a matrix.

Answer: (1) First we compute ||C||. Let $C_j$ denote the j-th column of C. Since C has orthonormal columns, each $C_j$ has norm 1. Then $$||C|| = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} C_{ij}^2} = \sqrt{\sum_{j=1}^{n}\left(\sum_{i=1}^{n} C_{ij}^2\right)} = \sqrt{\sum_{j=1}^{n} ||C_j||^2} = \sqrt{n}.$$ Now by Theorem 2.2, $$||CX|| \leq ||C||\,||X|| = \sqrt{n}\,||X||,$$ as desired.

(2) By Problem 2.17, $CX = x_1 C_1 + x_2 C_2 + \dots + x_n C_n$. To find the norm of the right-hand side, we apply the Pythagorean theorem (here we need that the columns of C are orthogonal) to get $$||x_1 C_1 + x_2 C_2 + \dots + x_n C_n||^2 = ||x_1 C_1||^2 + ||x_2 C_2||^2 + \dots + ||x_n C_n||^2.$$ Now we can put the pieces together: $$||CX||^2 = ||x_1 C_1 + x_2 C_2 + \dots + x_n C_n||^2 = x_1^2 ||C_1||^2 + x_2^2 ||C_2||^2 + \dots + x_n^2 ||C_n||^2 = x_1^2 + x_2^2 + \dots + x_n^2 = ||X||^2.$$ Since norms are nonnegative, we conclude that ||CX|| = ||X||.

3. Problem 2.44. Use the Cauchy-Schwarz inequality $$|A \cdot B| \leq ||A||\,||B||$$ to prove: (a) the function f(X) = C · X is uniformly continuous; (b) the function g(X, Y) = X · Y is continuous.
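Before turning to the proof of 2.44, the conclusion of Problem 2.19 is easy to corroborate numerically. The sketch below (not part of any proof; it assumes NumPy is available) builds a matrix with orthonormal columns via a QR factorization, checks that its Frobenius-style norm is √n, and confirms that multiplication by it preserves Euclidean norms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# QR factorization of a (generically full-rank) random matrix
# yields Q with orthonormal columns.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = rng.standard_normal(n)

# The norm used in Theorem 2.2: square root of the sum of squared entries.
frob = np.sqrt((Q ** 2).sum())
assert np.isclose(frob, np.sqrt(n))                 # ||C|| = sqrt(n)
assert np.linalg.norm(Q @ X) <= frob * np.linalg.norm(X) + 1e-12

# The sharper conclusion of Problem 2.19: ||CX|| = ||X|| exactly.
assert np.isclose(np.linalg.norm(Q @ X), np.linalg.norm(X))
print("norm preserved")
```

The seed, dimension, and variable names here are arbitrary choices for illustration.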
Answer: (a) Case 1: if C = 0, then f(X) = 0 for all X, so |f(X) − f(Y)| = 0 < ε for every ε > 0 and all X, Y. Case 2: C = (c₁, c₂, ..., cₙ) ≠ (0, 0, ..., 0). By the definition of f and the properties of the dot product, |f(X) − f(Y)| = |C · X − C · Y| = |C · (X − Y)|. By the Cauchy-Schwarz inequality, $$|f(X) - f(Y)| = |C \cdot (X - Y)| \leq ||C||\,||X - Y||.$$ Given ε > 0, take $\delta = \frac{\varepsilon}{||C||}$. If $||X - Y|| < \delta$, then $$|f(X) - f(Y)| \leq ||C||\,||X - Y|| < ||C||\,\delta = \varepsilon$$ for all X and Y in Rⁿ. Since δ does not depend on X or Y, f is uniformly continuous for any C.

(b) Fix V = (A, B) in R²ⁿ; we show that g(X, Y) = X · Y is continuous at (A, B). Given ε > 0, we must find δ > 0 such that if $||U - V|| = \sqrt{||X - A||^2 + ||Y - B||^2} < \delta$, where U = (X, Y), then |g(U) − g(V)| = |X · Y − A · B| < ε. By the triangle and Cauchy-Schwarz inequalities, $$|X \cdot Y - A \cdot B| = |[(X - A) + A] \cdot [(Y - B) + B] - A \cdot B|$$ $$= |(X - A) \cdot (Y - B) + B \cdot (X - A) + A \cdot (Y - B)|$$ $$\leq ||X - A||\,||Y - B|| + ||B||\,||X - A|| + ||A||\,||Y - B||.$$ Now set $\delta = \min\left(1, \frac{\varepsilon}{1 + ||A|| + ||B||}\right)$. If ||U − V|| < δ, then ||X − A|| < δ ≤ 1 and ||Y − B|| < δ, so ||X − A|| ||Y − B|| ≤ ||X − A||, and $$|X \cdot Y - A \cdot B| \leq ||X - A|| + ||B||\,||X - A|| + ||A||\,||Y - B|| \leq (1 + ||A|| + ||B||)\,||U - V|| < (1 + ||A|| + ||B||)\,\delta \leq \varepsilon.$$ Therefore, for any (A, B) in R²ⁿ, g(X, Y) = X · Y is continuous.

4. Problem 2.45. In the triangle inequality ||A + B|| ≤ ||A|| + ||B||, put A = X − Y and B = Y. Deduce ||Y|| − ||X|| ≤ ||Y − X||. Show that if two points are within one unit distance of each other, then the difference of their norms is less than or equal to one.

Answer: Let A and B be in Rⁿ. Applying the triangle inequality, we have $$||A + B|| \leq ||A|| + ||B||.$$ Let X = A + B and Y = B, so A = X − Y. Then $$||X|| \leq ||X - Y|| + ||Y||, \qquad ||X|| - ||Y|| \leq ||X - Y||.$$ Exchanging the symbols X and Y, we get $$||Y|| - ||X|| \leq ||Y - X||.$$ So when X and Y are within one unit of distance of each other, that is, ||Y − X|| ≤ 1, the inequality gives $$||Y|| - ||X|| \leq ||Y - X|| \leq 1.$$

5. Problem 3.9.
Suppose a function F from Rⁿ to Rᵐ is differentiable at A. Justify the following statements, which prove $$L_A H = DF(A)H,$$ that is, that the linear function L_A in Definition 3.4 is given by the matrix of partial derivatives DF(A). (a) There is a matrix C such that L_A(H) = CH for all H. (b) Denote by C_i the i-th row of C. The fraction $$\frac{||F(A+H) - F(A) - L_A H||}{||H||} = \frac{||F(A+H) - F(A) - CH||}{||H||}$$ tends to zero as ||H|| tends to zero if and only if each component $$\frac{f_i(A+H) - f_i(A) - C_i \cdot H}{||H||}$$ tends to zero as ||H|| tends to zero. (c) Set H = he_j in the i-th component of the numerator to show that the partial derivative $f_{i, x_j}(A)$ exists and is equal to the (i, j) entry of C.

Answer: (a) F from Rⁿ to Rᵐ is differentiable at A, so by the definition of differentiability there is a linear function L_A such that $$\frac{||F(A+H) - F(A) - L_A(H)||}{||H||}$$ tends to 0 as ||H|| tends to zero. By Theorem 2.1, every linear function from Rⁿ to Rᵐ can be written as L_A(H) = CH for all H, where C is some m × n matrix.

(b) The absolute value of each component of a vector is at most the norm of the vector, and the norm is at most the sum of the absolute values of the components. So for each H and each i we have $$0 \leq |f_i(A+H) - f_i(A) - C_i \cdot H| \leq ||F(A+H) - F(A) - CH|| \leq \sum_{k=1}^{m} |f_k(A+H) - f_k(A) - C_k \cdot H|,$$ where C_i is the i-th row of C. Divide through by ||H||. If $\frac{||F(A+H) - F(A) - CH||}{||H||}$ tends to 0, then by the squeeze theorem each component over ||H|| tends to zero; conversely, if every component over ||H|| tends to zero, so does their sum, and hence so does the middle quantity.

(c) Let H = he_j, where e_j = (0, 0, ..., 1, 0, ..., 0) has the 1 in the j-th place, so that ||H|| = |h|. By part (b), $$\lim_{h \to 0} \frac{|f_i(A + he_j) - f_i(A) - hC_i \cdot e_j|}{|h|} = 0.$$ Therefore $$\lim_{h \to 0} \frac{f_i(A + he_j) - f_i(A)}{h} = C_i \cdot e_j.$$ Since C_i · e_j = c_{ij} and, by the definition of partial derivatives, $$\lim_{h \to 0} \frac{f_i(A + he_j) - f_i(A)}{h} = \frac{\partial f_i}{\partial x_j}(A),$$ we conclude that $c_{ij} = \frac{\partial f_i}{\partial x_j}(A)$.

6. Problem 6.14. Justify the following items, which prove: if f is continuous on R² and ∫_R f dA = 0 for all smoothly bounded sets R, then f is identically zero. (a) If f(a, b) = p > 0, then there is a disc D of radius r > 0 centered at (a, b) in which $f(x, y) > \frac{1}{2}p$. (b) If f is continuous and f(x, y) ≥ p₁ > 0 on a disc R, then ∫_R f dA ≥ p₁ · Area(R).
(c) If ∫_R f dA = 0 for all smoothly bounded regions R, then f cannot be positive at any point. (d) f is not negative at any point either. (e) f = 0 at all points.

Answer: Throughout, we assume f is continuous on R² and ∫_R f dA = 0 for all smoothly bounded sets R. (a) Because f is continuous at (a, b), the definition of continuity (with ε = p/2) gives r > 0 such that |f(x, y) − f(a, b)| < p/2 for all (x, y) with ||(x, y) − (a, b)|| < r. Since f(a, b) = p > 0, this means p − p/2 < f(x, y) < p + p/2 on the disc; in particular, f(x, y) > p/2. (b) Since R is bounded, its closure is closed and bounded, so we can apply the extreme value theorem: f is bounded on the closure of R, and in particular on R. f is also integrable on R. Applying the lower bound property with the bound f ≥ p₁, we get ∫_R f dA ≥ p₁ · Area(R). (c) Suppose f is positive at some point (a, b). By (a), there is a disc R of nonzero radius on which f(x, y) > f(a, b)/2 > 0. By (b), ∫_R f dA ≥ (f(a, b)/2) · Area(R) > 0. But we assumed ∫_R f dA = 0 for all smoothly bounded sets R, a contradiction. Therefore f cannot be positive at any point. (d) The function −f is continuous, and for every smoothly bounded region R, linearity gives ∫_R (−f) dA = −∫_R f dA = −0 = 0. By (c), −f cannot be positive at any point, so f cannot be negative at any point. (e) Therefore, for any (a, b), f(a, b) is defined and is neither positive nor negative, so it must be 0.

7. Problem 6.44. Justify the following steps to prove that if f is integrable on R² and g is a continuous function with 0 ≤ g ≤ f, then g is integrable on R². (a) ∫_{D(n)} g dA exists. (b) 0 ≤ ∫_{D(n)} g dA ≤ ∫_{D(n)} f dA. (c) The numbers ∫_{D(n)} g dA form an increasing sequence bounded above. (d) lim_{n→∞} ∫_{D(n)} g dA exists.

Answer: Checking the domain D: D = R² is unbounded, and g ≥ 0 is continuous, so we need to prove that lim_{n→∞} ∫_{D(n)} g dA exists.
(a) g ≥ 0 is continuous on R² and D(n) is bounded for each n, so g is integrable over D(n). (b) By Theorem 6.9, L · Area(D) ≤ I(f, D) whenever L ≤ f on D; applying this with L = 0 and the fact that 0 ≤ g, we get 0 = 0 · Area(D(n)) ≤ ∫_{D(n)} g dA. And since 0 ≤ f(x, y) − g(x, y), $$0 \leq \int_{D(n)} \big(f(x, y) - g(x, y)\big)\, dA = \int_{D(n)} f\, dA - \int_{D(n)} g\, dA.$$ Therefore 0 ≤ ∫_{D(n)} g dA ≤ ∫_{D(n)} f dA. (c) Let Cₙ = ∫_{D(n)} g dA. Because g ≥ 0 and D(n) ⊆ D(n + 1), the sequence C₁, C₂, C₃, ..., Cₙ, ... is increasing. Since 0 ≤ ∫_{D(n)} g dA ≤ ∫_{D(n)} f dA and $$\lim_{n \to \infty} \int_{D(n)} f\, dA = \int_{D} f\, dA$$ exists, we get $$\int_{D(n)} g\, dA \leq \int_{D} f\, dA,$$ so the sequence is bounded above. (d) By the monotone convergence theorem for sequences, an increasing sequence that is bounded above converges, so lim_{n→∞} ∫_{D(n)} g dA = lim_{n→∞} Cₙ exists.

8. Problem 6.50. Justify steps (a)–(d) to prove that if a continuous function f is integrable on an unbounded set D, then |∫_D f dA| ≤ ∫_D |f| dA. (a) ∫_D f dA = ∫_D f₊ dA − ∫_D f₋ dA ≤ ∫_D f₊ dA + ∫_D f₋ dA = ∫_D |f| dA. (b) ∫_D (−f) dA ≤ ∫_D |f| dA. (c) −∫_D f dA ≤ ∫_D |f| dA. (d) |∫_D f dA| ≤ ∫_D |f| dA.

Answer: (a) By Definition 6.9, if f is continuous and integrable on an unbounded set D, then |f| is integrable on D. Rewrite f(x, y) = f₊(x, y) − f₋(x, y), where f₊(x, y) = f(x, y) if f(x, y) ≥ 0 and 0 otherwise, and f₋(x, y) = −f(x, y) if f(x, y) ≤ 0 and 0 otherwise.
So, by the definition of ∫_D f dA, $$\int_{D} f\, dA = \int_{D} f_+\, dA - \int_{D} f_-\, dA.$$ Since ∫_D f₋ dA is nonnegative, $$\int_{D} f_+\, dA - \int_{D} f_-\, dA \leq \int_{D} f_+\, dA + \int_{D} f_-\, dA.$$ Since f₊ ≥ 0 and f₋ ≥ 0 are integrable over D, $$\int_{D(n)} f_+\, dA + \int_{D(n)} f_-\, dA = \int_{D(n)} (f_+ + f_-)\, dA.$$ By the properties of limits over the increasing sequence of sets D(n), ∫_{D(n)} (f₊ + f₋) dA converges, so $$\int_{D} f_+\, dA + \int_{D} f_-\, dA = \int_{D} (f_+ + f_-)\, dA = \int_{D} |f|\, dA.$$ Combining these with the equation f(x, y) = f₊(x, y) − f₋(x, y), we get $$\int_{D} f\, dA \leq \int_{D} |f|\, dA.$$ (b) In the same way, we apply (a) to the function −f to get $$\int_{D} (-f)\, dA \leq \int_{D} |-f|\, dA = \int_{D} |f|\, dA.$$ (c) By the properties of limits and the equation $\int_{D(n)} (-f)\, dA = -\int_{D(n)} f\, dA$, we get $$-\int_{D} f\, dA \leq \int_{D} |f|\, dA.$$ (d) If b ≤ a and −b ≤ a, then |b| ≤ a. From (a), we have $$\int_{D} f\, dA \leq \int_{D} |f|\, dA,$$ and from (b) and (c), $$-\int_{D} f\, dA \leq \int_{D} |f|\, dA.$$ Therefore we can conclude that $$\left|\int_{D} f\, dA\right| \leq \int_{D} |f|\, dA.$$

9. Problem 4.21. Find the point on the plane $$z = x - 2y + 3$$ that is closest to the origin, by finding where the square of the distance between (0, 0, 0) and a point (x, y, z) of the plane is at a minimum. Use the matrix of second partial derivatives to show that the point is a local minimum.

Answer: Let $$D = d^2 = f(x, y) = x^2 + y^2 + (x - 2y + 3)^2.$$ To find the local extrema we set $$\nabla f = (4x - 4y + 6,\ -4x + 10y - 12) = (0, 0),$$ which holds at (−0.5, 1). The Hessian there is $$H(-0.5, 1) = \begin{bmatrix} 4 & -4 \\ -4 & 10 \end{bmatrix}.$$ Because 4 > 0 and (4)(10) − (−4)² = 24 > 0, Theorem 4.3 shows that H is positive definite. By Theorem 4.8, if ∇f(A) = 0 and the Hessian matrix [f_{x_i x_j}(A)] is positive definite at A, then f(A) is a local minimum. Therefore f has a local minimum at (−0.5, 1), and the closest point on the plane is (−0.5, 1, −0.5 − 2(1) + 3) = (−0.5, 1, 0.5).

10. Problem 7.32. Let S be the unit sphere centered at the origin in R³.
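Before evaluating the sphere integrals, the critical-point computation in Problem 4.21 above can be double-checked numerically. This is a sanity check, not part of the solution; it assumes NumPy and simply re-evaluates the gradient and Hessian of f(x, y) = x² + y² + (x − 2y + 3)²:

```python
import numpy as np

# Gradient of f(x, y) = x^2 + y^2 + (x - 2y + 3)^2 from Problem 4.21.
def grad(x, y):
    return np.array([4 * x - 4 * y + 6.0, -4 * x + 10 * y - 12.0])

# Hessian of f (constant, since f is quadratic).
H = np.array([[4.0, -4.0],
              [-4.0, 10.0]])

assert np.allclose(grad(-0.5, 1.0), 0.0)      # gradient vanishes at (-0.5, 1)
assert np.all(np.linalg.eigvalsh(H) > 0)      # Hessian is positive definite
print("(-0.5, 1) is a local minimum")
```

Positive definiteness is checked here via eigenvalues rather than the leading-minor test of Theorem 4.3; the two criteria agree for symmetric matrices.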
Evaluate the following items, using as little calculation as possible. (a) ∫_S 1 dσ. (b) ∫_S ||X||² dσ. (c) Verify that ∫_S x₁² dσ = ∫_S x₂² dσ = ∫_S x₃² dσ using either a symmetry argument or parametrizations. Can you do this without evaluating them? (d) Use the results of parts (b) and (c) to deduce the value of ∫_S x₁² dσ.

Answer: (a) Geometrically, ∫_S 1 dσ is the surface area of the unit sphere in R³, so ∫_S 1 dσ = 4π · 1² = 4π. (b) For all X ∈ S we have ||X||² = 1, therefore ∫_S ||X||² dσ = ∫_S 1 dσ = 4π. (c) Rotation by π/2 about the x₃-axis corresponds to a transformation on the domain of the parametrization of S that carries the x₁-coordinate to the position of the x₂-coordinate, therefore ∫_S x₁² dσ = ∫_S x₂² dσ. In the same way, a rotation by π/2 about the x₂-axis gives ∫_S x₁² dσ = ∫_S x₃² dσ. Therefore ∫_S x₁² dσ = ∫_S x₂² dσ = ∫_S x₃² dσ, with no evaluation needed. (d) By the definition of the norm, ||X||² = x₁² + x₂² + x₃². So $$\int_S ||X||^2\, d\sigma = \int_S (x_1^2 + x_2^2 + x_3^2)\, d\sigma = 3\int_S x_1^2\, d\sigma = 4\pi.$$ Therefore $$\int_S x_1^2\, d\sigma = \frac{1}{3}\int_S ||X||^2\, d\sigma = \frac{4\pi}{3}.$$
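The value 4π/3 from Problem 7.32(d) can be corroborated with a quick Monte Carlo estimate. The sketch below assumes NumPy and uses the standard trick that normalized Gaussian vectors are uniformly distributed on the unit sphere; the sample size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
# Normalizing standard Gaussian vectors gives uniform points on the sphere S.
P = rng.standard_normal((200_000, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)

area = 4 * np.pi                        # surface area of S, part (a)
est = area * (P[:, 0] ** 2).mean()      # Monte Carlo estimate of the x1^2 integral

# Should agree with part (d)'s value 4*pi/3 up to sampling error.
assert abs(est - 4 * np.pi / 3) < 0.1
print(est)
```

The same array also illustrates part (c): the mean of P[:, 1]**2 or P[:, 2]**2 gives an estimate of the same size, reflecting the rotational symmetry.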