Let L : Rn → Rm be defined by L(x) = Ax. (a) Find the dimension of ker L. (b) Find a basis for range L. Let L : R3 → R3 be the linear transformation defined by …. (a) Prove that L is invertible.
Let L : V → W be a linear transformation. Find the dimension of the solution space for the following homogeneous system: …. Prove or disprove the following: (a) L is one-to-one. (b) L is onto. Let L be the linear transformation defined in Exercise …, Section 6. Let L : … be the linear transformation defined by …. Prove Corollary 6.
Let L : R10 → …. Theorem 6. …. Moreover, A is the only matrix with this property.
Proof. We show how to construct the matrix A. L(vj) is a vector in W, and since T is an ordered basis for W, we can express this vector as a linear combination of the vectors in T in a unique manner. Let x be any vector in V. …. Hence the matrix A is unique.
Step 2. Find the coordinate vector [L(vj)]T of L(vj) with respect to T. This means that we have to express L(vj) as a linear combination of the vectors in T [see Equation (2)], and this requires the solution of a linear system.
We can thus work with matrices rather than with linear transformations. However, we can find L(p(t)) by using the matrix A as follows: since …. We then compute L(v), using [L(v)]T and T. Notice that if we change the order of the vectors in S or T, the matrix A may change. The matrix A is called the representation of L with respect to the ordered bases S and T. We also say that A represents L with respect to S and T.
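The two-step construction just described, computing L(vj) for each S-basis vector and then solving a linear system to express it in T-coordinates, can be sketched numerically. Everything below (the map M and the bases S and T) is made-up illustration data, not an example taken from the text:

```python
import numpy as np

# L : R^2 -> R^2 given in standard coordinates by L(x) = M x.
# M, S, and T are hypothetical data for illustration only.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Ordered bases S (domain) and T (codomain), stored as columns.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
T = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# Step 1: compute L(v_j) for each basis vector v_j of S.
L_of_S = M @ S
# Step 2: express each L(v_j) in T-coordinates, i.e. solve T a = L(v_j).
A = np.linalg.solve(T, L_of_S)   # columns are the coordinate vectors [L(v_j)]_T

# Check the defining property: [L(x)]_T = A [x]_S for any x.
x = np.array([2.0, -1.0])
x_S = np.linalg.solve(S, x)
lhs = np.linalg.solve(T, M @ x)
assert np.allclose(lhs, A @ x_S)
print(A)
```

The assertion verifies that the matrix A built column by column really represents L with respect to S and T.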
Physicists and others who deal at great length with linear transformations perform most of their computations with the matrix representations of the linear transformations. Of course, the relationship between linear transformations and matrices is a much stronger one than mere computational convenience. Then the representation of L with respect to S and T is …. Similarly, to determine the coordinates of the images of the S-basis vectors, this can be done simultaneously. That is, the matrix A consists of the last n columns of this last matrix. Also, we can show readily that the matrix of the identity operator (see Exercise 22 in Section 6) is …. It can be shown (Exercise 23) that the matrix of the identity operator with respect to S and T is the transition matrix from the S-basis to the T-basis (see Section 4). This fact, which can be proved directly at this point, follows almost trivially in Section 6.
Moreover, the problem of finding range L reduces to the problem of finding the column space of A. … L is invertible. … L is one-to-one.
Key Terms: Matrix representing a linear transformation; Ordered basis; Invariant subspace.
Exercises
1. Let S be the natural basis for R2 and let …. Let S and T be the natural bases for …, respectively. Find the representation of L with respect to (a) S and T; (b) S' and T'. Let L : R2 → R2 be the linear transformation rotating R2 counterclockwise through …. Let L : R3 → R3 be defined by …. Find the representation of L with respect to S. (a) Find the representation of L with respect to the natural basis S for R3. (b) Find L(…). Find L(−3, …) by using the definition of L and the matrices obtained in parts (a) and (b). Let L : R3 → R3 be defined as in Exercise 5.
Let {e1, e2, e3} be an ordered basis for R3. Let L : R3 → ….
Neither is V unique if any of the eigenvalues of A^T A are repeated. It can be shown that the preceding construction gives matrices U and V …. We illustrate the process in Example 6. First, we compute A^T A, and then compute its eigenvalues and eigenvectors. Explain why. Recording the results to six decimals, ….
For example, previously in Section 4 …. An implicit assumption in this statement is that all the computational steps in the row operations use exact arithmetic. Unfortunately, in most computing environments, when we perform row operations, exact arithmetic is not used. Rather, floating point arithmetic, which is a model of exact arithmetic, is used.
In some cases this loss of accuracy is enough to introduce doubt into the computation of rank. The following two results, which we state without proof, ….
Theorem 8. …. Multiple singular values are counted according to their multiplicity. Because the matrices U and V of a singular value decomposition are orthogonal, it can be shown that most of the errors due to the use of floating point arithmetic occur in the computation of the singular values. The size of the matrix A and characteristics of the floating point arithmetic are often used to determine a threshold value below which singular values are considered zero.
It has been argued that singular values and the singular value decomposition give us a computationally reliable way to compute the rank of A. In addition to determining rank, the singular value decomposition of a matrix provides orthonormal bases for the fundamental subspaces associated with a linear system of equations.
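The thresholding idea described above can be sketched with a standard SVD routine. The matrix below is made-up illustration data, and the tolerance formula is one common choice (proportional to the machine epsilon, the matrix dimensions, and the largest singular value), not a prescription from the text:

```python
import numpy as np

# Rank estimation via singular values: count those above a threshold.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # a multiple of row 1
              [1.0, 0.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)          # singular values, descending
tol = max(A.shape) * np.finfo(float).eps * s[0] # threshold below which a
rank = int(np.sum(s > tol))                     # singular value counts as zero
print(rank)   # -> 2
```

Here the second row is exactly twice the first, so only two singular values survive the threshold and the computed rank is 2.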
We state without proof the following: Theorem 8. …. Then the following statements are true: (a) The first r columns of U are an orthonormal basis for the column space of A. ….
For examples of these applications, see MAA Notes Number 19, The Mathematical Association of America; Hill, D., Experiments in Computational Matrix Algebra, New York: Random House; Kahaner, D., Numerical Methods and Software, Upper Saddle River, NJ: Prentice Hall; and Stewart, G., Introduction to Matrix Computations, New York: Academic Press.
Then A has a unique dominant eigenvalue.
Furthermore, suppose that A is diagonalizable with associated linearly independent eigenvectors x1, …, xn. Let … and compute the sequence of vectors Ax, A^2 x, …. Hence p1 and p2 are trajectories.
It follows that the eigenvectors p1 and p2 determine lines or rays through the origin in the phase plane, and a phase portrait for this case has the general form shown in Figure 8. …. To complete the portrait, we need more than the special trajectories corresponding to the eigendirections. These other trajectories depend on the values of λ1 and λ2.
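The sequence x, Ax, A^2 x, … mentioned above is the basis of the power method for approximating the dominant eigenvalue and its eigenvector. Here is a minimal sketch; the matrix and starting vector are made-up illustration data:

```python
import numpy as np

# Power method sketch: the normalized sequence x, Ax, A^2 x, ... tends toward
# the dominant eigenvector, and the Rayleigh quotient estimates its eigenvalue.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues 3 and 1 (hypothetical example)
x = np.array([1.0, 0.0])

for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)     # renormalize to avoid overflow

dominant = x @ A @ x              # Rayleigh quotient
print(round(dominant, 6))         # -> 3.0
```

Convergence is geometric with ratio |λ2/λ1|, so a dominant eigenvalue well separated from the rest is found quickly.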
Hence all the trajectories tend to go away from the equilibrium point at the origin. The phase portrait for such dynamical systems is like that in Figure 8. …. In this case ….
A quadratic equation in the variables x and y has the form ax^2 + 2bxy + cy^2 + dx + ey + f = 0. The graph of Equation (1) is a conic section. In Figure 8. …. Degenerate cases of the conic sections are a point, a line, a pair of lines, or the empty set. The nondegenerate conics are said to be in standard position if their graphs and equations are as given in Figure 8. ….
The equation is then said to be in standard form. Thus the x-intercepts are ±5. The x-intercepts are ±3. Thus the graph of this equation consists of all the points on the x-axis. First, notice that the equations of the conic sections whose graphs are in standard position do not contain an xy-term (called a cross-product term).
If a cross-product term appears in the equation, the graph is a conic section that has been rotated from its standard position [see Figure 8. …]. Also, notice that none of the equations in Figure 8. …. If either of these cases occurs and there is no xy-term in the equation, …. On the other hand, if a cross-product term is present in the given equation, ….
If an x'-term is not present in the given equation, but an x^2-term and an x-term are, …. Thus, if an xy-term appears in a given equation, ….
Solution. Since there is no cross-product term, we need only translate axes. Completing the squares in the x- and y-terms, …. As discussed in Section 8. …. As noted in Section 7. …. Write the equation in standard form.
Associated eigenvectors are obtained by solving the homogeneous system …. Equation (10) is the standard form of the equation of the ellipse.
The graph of a given quadratic equation in x and y can be identified from the equation that is obtained after rotating axes, that is, from Equation (6) or (7). The identification of the conic section given by these equations is shown in Table 8. …. In Exercises 25 through …, identify the graph of each equation and write each equation in standard form.
Quadric surfaces are often studied and sketched in analytic geometry and calculus. Here we use Theorems 8. ….
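After rotating axes, the type of conic is determined by the eigenvalues of the symmetric matrix [[a, b], [b, c]] of the quadratic-form part of ax^2 + 2bxy + cy^2 + ... = 0: two eigenvalues of the same sign give an ellipse, opposite signs a hyperbola, and a zero eigenvalue a parabola (setting degenerate cases aside). A sketch of that test, with made-up coefficients:

```python
import numpy as np

# Classify the quadratic-form part of ax^2 + 2bxy + cy^2 + ... = 0
# from the eigenvalue signs of its symmetric matrix [[a, b], [b, c]].
def classify_conic(a, b, c, tol=1e-12):
    eig = np.linalg.eigvalsh(np.array([[a, b], [b, c]]))
    nonzero = [e for e in eig if abs(e) > tol]
    if len(nonzero) < 2:
        return "parabola"
    return "ellipse" if nonzero[0] * nonzero[1] > 0 else "hyperbola"

print(classify_conic(1.0, 0.0, 2.0))   # -> ellipse
print(classify_conic(1.0, 3.0, 1.0))   # -> hyperbola  (det = 1 - 9 < 0)
print(classify_conic(1.0, 1.0, 1.0))   # -> parabola   (det = 0)
```

Equivalently, the sign of the determinant ac − b^2 distinguishes the three cases, which is what the eigenvalue product computes.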
A second-degree polynomial equation in three variables x, y, and z …. As in Section 8 …. As in the case of the classification of conic sections in Section 8, using the ideas in Section 8 …. If A is not diagonal, then a rotation of axes is used to eliminate any cross-product terms xy, xz, or yz.
The resulting equation will have the standard form …, or, in matrix form, …. We now turn to the classification of quadric surfaces. The inertia of A, denoted In(A), is an ordered triple of numbers (pos, neg, zer), where pos, neg, and zer are the number of positive, negative, and zero eigenvalues of A, respectively. The largest positive eigenvalue is denoted by λ1 and the smallest one by λpos.
We also assume that λ1 ≥ …. That is, the surface represented has no points. The assumptions λ1 ≥ …. Then there are only three possible cases for the inertia of A. …; then the quadratic form represents a parabola. This classification is identical to that given in Table 8. …. For example, …. Before classifying quadric surfaces using inertia, we present the quadric surfaces in the standard forms met in analytic geometry and calculus. Ellipsoid (see Figure 8. …).
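The inertia defined above is straightforward to compute from the eigenvalues of a symmetric matrix. A small sketch, with a made-up example matrix (a small tolerance stands in for "exactly zero" under floating point arithmetic):

```python
import numpy as np

# Compute the inertia In(A) = (pos, neg, zer) of a symmetric matrix A
# by counting the signs of its eigenvalues.
def inertia(A, tol=1e-12):
    eig = np.linalg.eigvalsh(A)
    pos = int(np.sum(eig > tol))
    neg = int(np.sum(eig < -tol))
    zer = len(eig) - pos - neg
    return (pos, neg, zer)

A = np.array([[2.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, 0.0]])
print(inertia(A))   # -> (1, 1, 1)
```

By Sylvester's law of inertia these counts are unchanged under any congruence transformation, which is why they classify the quadric surface.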
Elliptic Paraboloid (see Figure 8. …). A degenerate case of a parabola is a line, so a degenerate case of an elliptic paraboloid is an elliptic cylinder (see Figure 8. …).
The routine reduce prompts the user through a reduction, with messages such as "Turn off rational display", "Quit reduce!", "Enter number of row that changes", "Enter first row number", and finally "Your final matrix is: …".
Once you have used reduce on a number of linear systems, the reduction process becomes a fairly systematic computation. An alternative is routine rowop, which has the same functionality as reduce, but employs the graphics screen and uses MATLAB's graphical user interface. The command help rowop displays the following description: ROWOP: Perform row reduction on real matrix A by explicitly choosing row operations to use.
A row operation can be "undone", but this feature cannot be used in succession. Matrices can be at most 6 by 6. To enter information, click in the gray boxes with your mouse and then type in the desired numerical value followed by ENTER. If A is not square, …. The input vectors u and v are displayed in a three-dimensional perspective along with their cross product. For visualization purposes a set of coordinate 3-D axes is shown.
In case of an error the solution returned is all zeros. The orthonormal basis appears in the columns of y unless there is a second argument. The second argument can have any value. If A is singular, a warning is given.
Perform LU-factorization on matrix A by explicitly choosing row operations to use. No row interchanges are permitted. A row operation can be "undone." This routine is for small matrices. They will be displayed graphically along with their sum, difference, and a scalar multiple. Write a description of the behavior of this matrix sequence. Are they equal?
Show that B is symmetric and C is skew symmetric. Use reduce to find all solutions to the linear system in Exercise 9(a) in Section 2. In place of the command reduce, …. Do the row operations …. Use reduce to find the reduced row echelon form of matrix A in Exercise ML. …. Use reduce to find all solutions to the linear system in Exercise 6(a) in Section 2. Use reduce to find all solutions to the linear system in Exercise 8(b) in Section 2.
The backslash command, \, …. For more details on the command, see Hill, Experiments in Computational Matrix Algebra, New York: Random House. Use lupr in …. Use command rref([A eye(size(A))]). Solve the linear system in Example 2 in Section 2.
Check your LU-factorization. Solve Exercises 7 and 8 in Section 2. Use the routine reduce to perform row operations. … matrices are nonsingular. Use command rref. Just type det(A). Use the cofactor routine to evaluate the determinant of A. Use det (see Exercise ML. …). … the routine vec2demo. Determine a positive integer t so that det(t·A) …. For directions on using this routine, ….
Use cofactor to check your …. Use the cofactor routine (see Exercise ML. …). Use the cofactor routine to evaluate the determinant of A, using Theorem 3. For directions on using this routine, ….
Since p is true, …. We look at the most important logical connectives. Let p and q be statements. The statement "p and q" is denoted by p ∧ q and is called the conjunction of p and q. The statement "p or q" is denoted by p ∨ q and is called the disjunction of p and q.
The statement p ∨ q is true when either p or q (or both) is true; it is false only when both are false. The truth table giving the truth values of p ∨ q is given in Table C. …. Form the disjunction of the statements p: −2 is a negative integer, and q: … is a rational number. Of course, exactly one of the two possibilities occurred; both could not have occurred. Thus the connective or …. In mathematics and computer science, we always use the connective or in the inclusive sense.
Two statements are equivalent if they have the same truth values. This means that in the course of a proof or computation, we can always replace a given statement by an equivalent statement. Equivalent statements are used heavily in constructing proofs. The statement p is called the antecedent or hypothesis, and the statement q is called the consequent or conclusion. The connective if … then ….
A conditional statement can appear disguised in various forms. The truth table giving its truth values is shown in Table C. …, which we observe is exactly the same as Table C. …. The truth table giving its truth values is shown in Table C. ….
Solution. (a) Contrapositive: If two different lines intersect, then they are not parallel. The given implication and the contrapositive are true. Converse: If two different lines do not intersect, then they are parallel.
In this case, the given implication and the contrapositive are true. Converse: If ab is positive, then a and b are positive. The truth values of p ↔ q are given in Table C. ….
Observe that p ↔ q is true only when both p and q are true or when both are false. The biconditional p ↔ q can also be stated as "p is necessary and sufficient for q". We soon turn to a brief introduction to techniques of proof. First, we present in Table C. … a number of equivalences that are useful in this regard. Some of these are useful in techniques of proof. The construction of this logical argument may be quite elusive; the logical argument itself is what we call the proof.
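Equivalences like the ones just tabulated can be checked mechanically by enumerating all truth assignments. A short sketch verifying the contrapositive equivalence and the characterization of the biconditional:

```python
from itertools import product

# Verify two standard equivalences by exhaustive truth tables:
#   p -> q   is equivalent to   (not q) -> (not p)   (the contrapositive),
#   p <-> q  is true exactly when p and q have the same truth value.
def implies(p, q):
    return (not p) or q

for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)          # contrapositive
    assert (implies(p, q) and implies(q, p)) == (p == q)   # biconditional

print("all equivalences hold")
```

Because a statement in two variables has only four truth assignments, exhaustive checking is a complete proof of the equivalence.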
Each step in the "connection" must be justified or have a reason for its validity. Thus we connect p and q by logically building blocks of known or accepted facts. Often, it is not clear what building blocks (facts) to use and exactly how to get started on a fruitful path.
Unfortunately, we have no explicit guidelines in this area, other than to recommend a careful reading of the hypothesis p and conclusion q in order to clearly understand them. Only in this way can we begin to seek relationships (connections) between them. At any stage in a proof, we can replace a statement that needs to be derived by an equivalent statement.
The construction of a proof requires the building of a step-by-step connection (a logical bridge) between p and q. If we let b1, b2, … denote the intermediate statements, …. This approach is known as a direct proof. We illustrate this in Example 1. This is just q. In summary, we call this forward building. Such a logical bridge is called backward building. The two techniques can be combined: build forward a few steps, build backward a few steps. In practice, the choice of intermediate steps and the methods for deriving them are creative activities that cannot be precisely described.
Such a procedure is called an indirect method of proof. When the proof of the contrapositive is done directly, we call this proof by contrapositive. Unfortunately, there is no way to predict in advance that an indirect method of proof by contrapositive will be successful. Sometimes, the appearance of the word not in the conclusion q is a suggestion to try this method. There are no guarantees that it will work. We illustrate the use of proof by contrapositive in Example 2.
Prove that if n^2 is odd, then n is odd. Hence the given statement has been established by the method of proof by contrapositive. We can see why this method works by referring to Table C. …. The method of proof by contradiction starts with the assumption that p is true, together with the additional hypothesis that q is false. We would like to show that q is also true. When this is done, we say that we have reached a contradiction, so our additional hypothesis that q is false must be incorrect. But also by Theorem 3. …. Hence, their cross product is zero.
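The statement just proved (n^2 odd implies n odd, argued via the contrapositive: if n is even, then n^2 is even) can be spot-checked computationally over a range of integers. A check is of course not a proof, only a sanity test of the claim:

```python
# Spot-check: for integers n, n^2 is odd exactly when n is odd.
# (The proof above argues the contrapositive: n even => n^2 even.)
for n in range(-100, 101):
    assert (n * n % 2 == 1) == (n % 2 != 0)

print("n^2 odd <=> n odd holds for n in [-100, 100]")
```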
Hence, it must lie on the line through the origin perpendicular to v and in the plane determined by u and v. Applying Part (d) of Theorem 3. …. Since these vectors are not multiples of one another, the planes are not parallel.
Since one vector is three times the other, the planes are parallel. Since the inner product of these two vectors is not zero, the planes are not perpendicular. Alternatively, recall that a direction vector for the line is just the cross product of the normal vectors for the two planes, i.e., n1 × n2. Since the plane is perpendicular to a line with direction (2, 3, −5), we can use that vector as a normal to the plane.
Call the points A, B, C, and D, respectively. Since they have points in common, they must coincide. A normal n to a plane which is perpendicular to both of the given planes must be perpendicular to both n1 and n2. These, together with the given point and the methods of Example 2, will yield an equation for the desired plane.
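The construction just described, taking the cross product of the two normals to get a normal for the desired plane, can be sketched directly. All numeric data below (the normals n1 and n2 and the point) is made-up illustration data:

```python
# A plane through a given point, perpendicular to two given planes:
# its normal n must be perpendicular to both normals n1 and n2, so n = n1 x n2.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

n1 = (1.0, 0.0, 2.0)
n2 = (0.0, 1.0, -1.0)
point = (1.0, 2.0, 3.0)

n = cross(n1, n2)                              # normal of the desired plane
d = sum(ni * pi for ni, pi in zip(n, point))   # so that n . point = d
print(n, d)   # plane: n . (x, y, z) = d
```

Substituting the given point back into n · (x, y, z) = d confirms it lies on the plane, as Example 2's method requires.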
We change the parameter in the equations for the second line from t to s. Hence, the two lines coincide. They both pass through the point r0 and both are parallel to v. This represents a line through the point P1 with direction r2 — r1. Hence the given equation represents this line segment.
Thus the system is inconsistent, so the lines are skew. Exercise Set 4. The transformation is not linear because of the terms 2x1x2 and 3x1x2. In matrix terms, a dilation or contraction is represented by a scalar multiple of the identity matrix. Since such a matrix commutes with any square matrix of the appropriate size, the transformations commute. Compute the trace of the matrix given in Formula 17 and use the fact that a, b, c is a unit vector. Thus 3, 1 , for example, is not in the range.
Thus S is not linear. Thus, the Lagrange expression must be algebraically equivalent to the Vandermonde form. This is done by adding a next term to p i , pe. This is a vector space. We shall check only four of the axioms because the others follow easily from various properties of the real numbers. The details are easily checked. Let k and m be scalars. Axiom 4: There is no zero vector in this set.
Thus, there is no one zero vector that will work for every vector a, b, c in R3. Since we are using standard matrix addition and scalar multiplication, the majority of axioms hold. However, the following axioms fail for this set V: Axiom 1: Clearly if A is invertible, then so is —A.
Thus, V is not a vector space. Since we are using the standard operations of addition and scalar multiplication, Axioms 2, 3, 5, 7, 8, 9, 10 will hold automatically. However, for Axiom 4 to hold, we need the zero vector 0, 0 to be in V. Thus, the set of all points in R2 lying on a line is a vector space exactly in the case when the line passes through the origin.
Exercise Set 5. However, for Axiom 4 to hold, we need the zero vector 0, 0, 0 to be in V. Thus, the set of all points in R3 lying on a plane is a vector space exactly in the case when the plane passes through the origin. Planes which do not pass through the origin do not contain the zero vector. Since this space has only one element, it would have to be the zero vector.
In fact, this is just the zero vector space. Suppose that u has two negatives, —u 1 and —u 2. We have proved that it has at most one. Thus it is not a subspace. Therefore, it is a subspace of R3. The same is true of a constant multiple of such a polynomial. Hence, this set is a subspace of P3. Hence, the subset is closed under vector addition. Thus, the subset is not closed under scalar multiplication and is therefore not a subspace. Thus the set is a subspace.
Thus (2, 2, 2) is a linear combination of u and v. Thus, the system of equations is inconsistent and therefore (0, 4, 5) is not a linear combination of u and v. Since the determinant of the system is nonzero, the system of equations must have a solution for any values of x, y, and z whatsoever.
Therefore, v1, v2, and v3 do indeed span R3. Note that we can also show that the system of equations has a solution by solving for a, b, and c explicitly. Since this is not the case for all values of x, y, and z, the given vectors do not span R3. Hence the given polynomials do not span P2. The set of solution vectors of such a system does not contain the zero vector. Hence it cannot be a subspace of Rn. Alternatively, we could show that it is not closed under scalar multiplication.
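The determinant test used above, that v1, v2, v3 span R^3 exactly when det([v1 v2 v3]) ≠ 0, is easy to sketch. The three vectors below are made-up illustration data:

```python
# Test whether v1, v2, v3 span R^3: the system [v1 v2 v3] c = w is solvable
# for every w exactly when det([v1 v2 v3]) != 0.
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

v1, v2, v3 = (1, 0, 1), (0, 1, 1), (1, 1, 0)
M = [[v1[0], v2[0], v3[0]],     # the vectors as columns
     [v1[1], v2[1], v3[1]],
     [v1[2], v2[2], v3[2]]]

print(det3(M) != 0)   # -> True: the three vectors span R^3
```

For three vectors in R^3 this single determinant settles both spanning and linear independence at once.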
Let u and v be vectors in W. Let W1 and W2 be subspaces of V. This follows from the closure of both W1 and W2 under vector addition and scalar multiplication. They cannot all lie in the same plane. Hence, the four vectors are linearly independent. This implies that k3 and hence k2 must also equal zero. Thus the three vectors are linearly independent. Thus they do not lie in the same plane.
Suppose that S has a linearly dependent subset T. Denote its vectors by w1,…, wm. Since not all of the constants are zero, it follows that S is not a linearly independent set of vectors, contrary to the hypothesis. That is, if S is a linearly independent set, then so is every non-empty subset T.
This is similar to Problem …; use Theorem 5. The set has the correct number of vectors. Thus the desired coordinate vector is (3, −2, 1). For instance, (1, −1, −1) and (0, 5, 2) are a basis because they satisfy the plane equation and neither is a multiple of the other. For instance, (2, −1, 4) will work, as will any nonzero multiple of this vector. Hence it is a basis for P2. There is. Thus, its solution space should have dimension n − 1. Since AT is also invertible, it is row equivalent to In.
It is clear that the column vectors of In are linearly independent. Hence, by virtue of Theorem 5. Therefore the rows of A form a set of n linearly independent vectors in Rn, and consequently form a basis for Rn.
Any invertible matrix will satisfy this condition. The nullspace of D is the entire xy-plane. Use Theorems 5.
However A must be the zero matrix, so the system gives no information at all about its solution. That is, the row and column spaces of A have dimension 2, so neither space can be a line. Rank A can never be 1.
Thus, by Theorem 5. Hence, Theorem 5. Verify that these polynomials form a basis for P 1. Exercise Set 6. To prove Part a of Theorem 6. To prove Part d , observe that, by Theorem 5. By inspection, a normal vector to the plane is 1, —2, —3.
From the reduced form, we see that the nullspace consists of all vectors of the form (16, 19, 1)t, so that the vector (16, 19, 1) is a basis for this space. Conversely, if a vector w of V is orthogonal to each basis vector of W, then, by Problem 20, it is orthogonal to every vector in W. In fact V is a subspace of W. (c) True. The two spaces are orthogonal complements and the only vector orthogonal to itself is the zero vector.
For instance, if A is invertible, then both its row space and its column space are all of Rn. See Exercise 3, Parts b and c. The set is therefore orthogonal. It will be an orthonormal basis provided that the three vectors are linearly independent, which is guaranteed by Theorem 6. Note that u1 and u2 are orthonormal. Thus we apply Theorem 6.
By Theorem 6. But v1 is a multiple of u1 while v2 is a linear combination of u1 and u2. This is similar to Exercise 29 except that the lower limit of integration is changed from —1 to 0. Then if u is any vector in V, we know from Theorem 6. Moreover, this decomposition of u is unique. Theorem 6. If the vectors vi form an orthogonal set, not necessarily orthonormal, then we must normalize them to obtain Part b of the theorem. However, although they are orthogonal with respect to the Euclidean inner product, they are not orthonormal.
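The normalization step discussed above, turning an orthogonal set into an orthonormal one, is the tail end of the Gram-Schmidt process. A minimal sketch in plain Python; the input vectors are made-up illustration data:

```python
import math

# Minimal Gram-Schmidt sketch: subtract projections onto the previously
# built (unit) vectors, then divide each result by its norm.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            coeff = sum(wi * ui for wi, ui in zip(w, u))   # <w, u>, u is unit
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

q = gram_schmidt([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0)])
dot = sum(a * b for a, b in zip(q[0], q[1]))
unit = all(abs(sum(c * c for c in u) - 1) < 1e-12 for u in q)
print(abs(dot) < 1e-12, unit)   # -> True True
```

The checks at the end confirm the output vectors are pairwise orthogonal and of unit length, which is exactly the orthonormality the theorem requires.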
However, they are neither orthogonal nor of unit length with respect to the Euclidean inner product. Suppose that v1, v2, …, vn is an orthonormal set of vectors. Thus, the orthonormal set of vectors cannot be linearly dependent. The zero vector space has no basis 0. This vector cannot be linearly independent. If A is a necessarily square matrix with a nonzero determinant, then A has linearly independent column vectors.
Thus, by Theorem 6. Hence the error vector is orthogonal to the column space of A. Therefore Ax — b is orthogonal to the column space of A. Since the row vectors and the column vectors of the given matrix are orthogonal, the matrix will be orthogonal provided these vectors have norm 1. Note that A is orthogonal if and only if AT is orthogonal.
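The orthogonality criterion used above, that a matrix is orthogonal exactly when its rows (equivalently its columns) are orthonormal, amounts to checking A^T A = I. A quick sketch using a standard rotation matrix as the example (not a matrix from the text):

```python
import numpy as np

# A is orthogonal iff A^T A = I; then A A^T = I as well, so A^T is
# orthogonal too, matching the row/column equivalence in the text.
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

is_orth = np.allclose(A.T @ A, np.eye(2))
is_orth_T = np.allclose(A @ A.T, np.eye(2))
print(is_orth, is_orth_T)   # -> True True
```

Rotations preserve lengths and angles, which is the geometric face of the algebraic condition A^T A = I.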
Since the rows of AT are the columns of A, we need only apply the equivalence of Parts a and b to AT to obtain the equivalence of Parts a and c. If A is the standard matrix associated with a rigid transformation, then Theorem 6. But if A is orthogonal, then Theorem 6. Exercise Set 7. By Theorem 7. Thus by Theorem 7.
Since A has no real eigenvalues, there are no lines which are invariant under A. Let aij denote the ijth entry of A.