Together with the fundamental notion of vector space, the linear transformation is another basic concept of linear algebra: it is the "carrier" of the linearity property from one vector space to another.
The large-scale use of this notion in geometry therefore requires a detailed treatment of the subject.
§1. Definition and General Properties
Let V and W be two vector spaces over the commutative field K.
1.1 Definition.
A function T: V → W with the following properties:
1) T(x + y) = T(x) + T(y), ∀ x, y ∈ V
2) T(ax) = a T(x), ∀ x ∈ V, ∀ a ∈ K
is called a linear transformation (linear map, linear operator or vector space morphism).
For the value T(x) of a linear transformation T, the notation Tx is also used sometimes.
1.2 Corollary.
The map T: V → W is a linear transformation if and only if
T(ax + by) = a T(x) + b T(y), ∀ x, y ∈ V, ∀ a, b ∈ K.   (1.1)
The proof is immediate. Condition (1.1) shows that a map T: V → W is a linear transformation if and only if the image of a linear combination of vectors is the same linear combination of the images of these vectors.
1° The map T: R^n → R^m, T(x) = AX, A ∈ M_{m×n}(R), X = ᵗx, is a linear transformation. In the particular case n = m = 1, the map defined by T(x) = ax, a ∈ R, is linear.
2° If U ⊂ V is a vector subspace, then the map T: U → V defined by T(x) = x is a linear transformation, named the inclusion map. In general, the restriction of a linear transformation to an arbitrary subset S ⊂ V is not a linear transformation; linearity is inherited only by vector subspaces.
3° The differentiation map T: C¹(a, b) → C⁰(a, b), T(f) = f′, is linear.
4° The integration map T: C⁰(a, b) → R, T(f) = ∫_a^b f(x) dx, is linear.
5° If T: V → W is a bijective linear transformation, then T⁻¹: W → V is a linear transformation.
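As a numerical illustration of example 1°, the sketch below (Python with numpy; the matrix A, the vectors x, y and the scalars a, b are arbitrary choices, not taken from the text) checks condition (1.1) for the map T(x) = AX:

```python
import numpy as np

# Hypothetical 2x3 matrix defining T: R^3 -> R^2, T(x) = A x
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(x):
    """The linear transformation induced by the matrix A."""
    return A @ x

x = np.array([1.0, 0.0, 2.0])
y = np.array([-1.0, 1.0, 0.5])
a, b = 2.0, -3.0

# Condition (1.1): the image of a linear combination is the same
# linear combination of the images.
assert np.allclose(T(a * x + b * y), a * T(x) + b * T(y))
```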
The set of linear transformations from the vector space V to the vector space W is denoted by L(V, W).
If we define on the set L(V, W) the operations:
(T_1 + T_2)(x) := T_1(x) + T_2(x), ∀ x ∈ V
(aT)(x) := a T(x), ∀ x ∈ V, ∀ a ∈ K
then L(V, W) acquires a K-vector space structure.
A vector space isomorphism is a bijective linear transformation T: V → W.
If W = V, then a linear map T: V → V is named an endomorphism of the vector space V, and the set of endomorphisms is denoted by End(V). For two endomorphisms T_1, T_2 ∈ End(V) we can define:
(T_1 ∘ T_2)(x) := T_1(T_2(x)), ∀ x ∈ V.
This operation is named the product of the transformations T_1 and T_2, written shortly T_1T_2.
A bijective endomorphism T: V → V is named an automorphism of the vector space V, and the set of automorphisms is denoted by Aut(V).
The automorphisms of a vector space form a stable part of End(V) with regard to the product of endomorphisms, and they form a group GL(V), also called the linear group of the vector space V.
If W = K, then a linear map T: V → K is named a linear form, and the set V* = L(V, K) of all linear forms on V is a K-vector space named the dual of the vector space V.
If V is a Euclidean vector space of finite dimension, then its dual V* has the same dimension and is identified with V.
1.3 Theorem.
If T: V → W is a linear transformation, then:
a) T(0_V) = 0_W and T(−x) = −T(x), ∀ x ∈ V;
b) the image T(U) ⊂ W of a vector subspace U ⊂ V is also a vector subspace;
c) the inverse image T⁻¹(W′) ⊂ V of a vector subspace W′ ⊂ W is also a vector subspace;
d) if the vectors x_1, x_2, ..., x_n ∈ V are linearly dependent, then the vectors T(x_1), T(x_2), ..., T(x_n) ∈ W are also linearly dependent.
Proof. a) Setting a = 0 and then a = −1 in the equation T(ax) = a T(x), we obtain T(0_V) = 0_W and T(−x) = −T(x). Since no confusion can arise, from now on we drop the indices of the two null vectors.
b) For u, v ∈ T(U) there exist x, y ∈ U such that u = T(x) and v = T(y). Since U ⊂ V is a vector subspace, for x, y ∈ U and a, b ∈ K we have ax + by ∈ U, and with relation (1.1) we obtain
au + bv = a T(x) + b T(y) = T(ax + by) ∈ T(U).
c) If x, y ∈ T⁻¹(W′), then T(x), T(y) ∈ W′, and for a, b ∈ K we have a T(x) + b T(y) = T(ax + by) ∈ W′ (W′ being a vector subspace), so ax + by ∈ T⁻¹(W′).
d) We apply the transformation T to the relation of linear dependence λ_1x_1 + λ_2x_2 + ... + λ_nx_n = 0 (with the λ_i not all zero) and, using a), we obtain the linear dependence relation λ_1 T(x_1) + λ_2 T(x_2) + ... + λ_n T(x_n) = 0.
1.4 Consequence.
If T: V → W is a linear transformation, then:
a) the set Ker T = T⁻¹(0) ⊂ V, called the kernel of the linear transformation T, is a vector subspace;
b) the image of the linear transformation T, Im T = T(V) ⊂ W, is a vector subspace;
c) if the vectors T(x_1), T(x_2), ..., T(x_n) ∈ W are linearly independent, then the vectors x_1, x_2, ..., x_n ∈ V are also linearly independent.
1.5 Theorem.
A linear transformation T: V → W is injective if and only if Ker T = {0}.
Proof. The injectivity of the linear transformation T combined with the general property T(0) = 0 implies Ker T = {0}.
Conversely, if Ker T = {0} and T(x) = T(y), then using the linearity property we obtain T(x − y) = 0, so x − y ∈ Ker T, that is x = y; hence T is injective.
The nullity of the operator T is the dimension of the kernel Ker T.
The rank of the operator T is the dimension of the image Im T.
1.6 Theorem. (rank theorem)
If the vector space V is finite dimensional, then the vector space Im T is also finite dimensional and we have the relation:
dim Ker T + dim Im T = dim V.
Proof. Let n = dim V and s = dim Ker T. For s ≥ 1 let us consider a base {e_1, ..., e_s} of Ker T and complete it to B = {e_1, ..., e_s, e_{s+1}, ..., e_n}, a base of the entire vector space V. The vectors e_{s+1}, e_{s+2}, ..., e_n represent a base of a supplementary subspace of the subspace Ker T.
For any y ∈ Im T there is x = Σ_{i=1}^{n} x_i e_i ∈ V such that y = T(x). Since T(e_1) = T(e_2) = ... = T(e_s) = 0, we obtain
y = T(x) = Σ_{i=s+1}^{n} x_i T(e_i),
which means that T(e_{s+1}), T(e_{s+2}), ..., T(e_n) generate the subspace Im T.
We must prove that the vectors T(e_{s+1}), ..., T(e_n) are linearly independent. Hence, from
λ_{s+1} T(e_{s+1}) + ... + λ_n T(e_n) = 0 ⇒ T(λ_{s+1} e_{s+1} + ... + λ_n e_n) = 0,
which means that λ_{s+1} e_{s+1} + ... + λ_n e_n ∈ Ker T. Moreover, as the subspace Ker T has only the null vector in common with the supplementary subspace, we obtain:
λ_{s+1} e_{s+1} + ... + λ_n e_n = 0 ⇒ λ_{s+1} = ... = λ_n = 0,
that is, T(e_{s+1}), ..., T(e_n) are linearly independent. Therefore the image subspace Im T is finite dimensional and, moreover,
dim Im T = n − s = dim V − dim Ker T.
For s = 0 (Ker T = {0} and dim Ker T = 0) we consider a base B = {e_1, ..., e_n} of the vector space V, and by the same reasoning we obtain that T(e_1), T(e_2), ..., T(e_n) represents a base of the space Im T, meaning dim Im T = dim V. (q.e.d.)
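The rank theorem can be checked numerically. In the sketch below (numpy; the matrix A is a hypothetical example, not taken from the text), dim Im T is the rank of A and dim Ker T is the number of columns minus the rank:

```python
import numpy as np

# Hypothetical 3x4 matrix of rank 2 (the third row is the sum of the
# first two), viewed as T: R^4 -> R^3, T(x) = A x, so dim V = 4.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)      # dim Im T
nullity = A.shape[1] - rank          # dim Ker T

# The rank theorem: dim Ker T + dim Im T = dim V
assert rank == 2 and nullity == 2
assert rank + nullity == A.shape[1]
```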
The linear dependence of a system of vectors is preserved by a linear transformation, as opposed to linear independence, which in general is not preserved. The conditions under which the linear independence of a system of vectors is maintained are given by the following theorem:
1.7 Theorem. 
If V is a ndimensional vector space, and T V W a linear transformation, then the following relations are equivalent : 1) T is injective 2) The image of a linearly independent vector system e_{1}, e_{2}, , e_{p} I V (p n) is a system of vectors T_{ }(e_{1}), T_{ }(e_{2}) , , T_{ }(e_{p}), which are also linearly independent. 
Proof. 1) ⇒ 2) Let us consider an injective linear transformation T, the linearly independent vectors e_1, e_2, ..., e_p and the images T(e_1), T(e_2), ..., T(e_p) of these vectors. For λ_i ∈ K, i = 1, ..., p, we have:
λ_1 T(e_1) + λ_2 T(e_2) + ... + λ_p T(e_p) = 0 ⇒
T(λ_1 e_1 + λ_2 e_2 + ... + λ_p e_p) = 0 ⇒
λ_1 e_1 + λ_2 e_2 + ... + λ_p e_p ∈ Ker T = {0} (T injective) ⇒
λ_1 e_1 + λ_2 e_2 + ... + λ_p e_p = 0 ⇒ λ_1 = λ_2 = ... = λ_p = 0,
therefore the vectors T(e_1), ..., T(e_p) are linearly independent.
2) ⇒ 1) Suppose there is a vector x ≠ 0 whose image is T(x) = 0. If B = {e_1, ..., e_n} ⊂ V is a base, then there exist x_i ∈ K, not all null, such that x = Σ_{i=1}^{n} x_i e_i. Applying T we obtain 0 = T(x) = Σ_{i=1}^{n} x_i T(e_i), a nontrivial linear dependence relation between the vectors T(e_1), ..., T(e_n), which contradicts hypothesis 2). Therefore T(x) = 0 implies x = 0, that is T is injective.
1.8 Consequence.
If V and W are two finite dimensional vector spaces and T: V → W is a linear transformation, then:
1) if T is injective and {e_1, ..., e_n} is a base of V, then {T(e_1), ..., T(e_n)} is a base of Im T;
2) two isomorphic vector spaces have the same dimension.
The proof follows directly from theorems 1.6 and 1.7.
§2. The Matrix of a Linear Transformation
Let V and W be two vector spaces over the field K.
2.1 Theorem.
If B = {e_1, e_2, ..., e_n} is a base of the vector space V and w_1, w_2, ..., w_n are n arbitrary vectors of W, then there is a unique linear transformation T: V → W with the property T(e_i) = w_i, i = 1, ..., n.
Proof. Let us consider a vector x ∈ V, written in the base B as x = Σ_{i=1}^{n} x_i e_i, and define T(x) = Σ_{i=1}^{n} x_i w_i. Clearly T(e_i) = w_i, and T is linear: the vector
ax + by = Σ_{i=1}^{n} (a x_i + b y_i) e_i
has the image
T(ax + by) = Σ_{i=1}^{n} (a x_i + b y_i) w_i = a Σ_{i=1}^{n} x_i w_i + b Σ_{i=1}^{n} y_i w_i = a T(x) + b T(y).
Let us suppose now that there is another linear transformation T′: V → W with the property T′(e_i) = w_i, i = 1, ..., n. Then for every x ∈ V
T′(x) = T′(Σ_{i=1}^{n} x_i e_i) = Σ_{i=1}^{n} x_i T′(e_i) = Σ_{i=1}^{n} x_i w_i = T(x),
which proves the uniqueness of the linear transformation T.
If the vectors w_1, ..., w_n are linearly independent, then the linear transformation T defined in theorem 2.1 is injective.
Theorem 2.1 states that a linear transformation T: V → W, dim V = n, is perfectly determined once its values on the vectors of a base B ⊂ V are known.
Let V_n and W_m be two K-vector spaces of dimensions n and m respectively, and let T: V_n → W_m be a linear transformation. If B = {e_1, ..., e_n} is a fixed base in V_n and B′ = {f_1, ..., f_m} is a fixed base in W_m, then the linear transformation T is uniquely determined by the values T(e_j) ∈ W_m. Expanding these vectors in the base B′, for j = 1, ..., n we have
T(e_j) = Σ_{i=1}^{m} a_ij f_i.   (2.1)
The coefficients a_ij ∈ K, i = 1, ..., m, j = 1, ..., n, determine the linear transformation T completely.
2.2 Definition.
The matrix A ∈ M_{m×n}(K) whose elements are given by relation (2.1) is called the matrix associated to the linear transformation T with regard to the pair of bases B and B′.
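Definition 2.2 can be illustrated with a small sketch (numpy; the transformation T below is a hypothetical map R^3 → R^2, with the canonical bases playing the role of B and B′): the j-th column of A holds the coordinates of T(e_j).

```python
import numpy as np

# A hypothetical linear map T: R^3 -> R^2
def T(x):
    return np.array([x[0] + 2 * x[1], 3 * x[2] - x[1]])

# Columns of the associated matrix = images of the base vectors e_j
E = np.eye(3)
A = np.column_stack([T(E[:, j]) for j in range(3)])
assert np.allclose(A, [[1.0, 2.0, 0.0],
                       [0.0, -1.0, 3.0]])

# The coordinates of the image are then obtained by matrix multiplication
x = np.array([1.0, -1.0, 2.0])
assert np.allclose(T(x), A @ x)
```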

2.3 Theorem.
If x = Σ_{j=1}^{n} x_j e_j ∈ V_n has the image y = T(x) = Σ_{i=1}^{m} y_i f_i ∈ W_m, then the coordinates of x and y are related by
y_i = Σ_{j=1}^{n} a_ij x_j, i = 1, ..., m.   (2.2)
Truly, T(x) = T(Σ_j x_j e_j) = Σ_j x_j T(e_j) = Σ_j x_j Σ_i a_ij f_i = Σ_i (Σ_j a_ij x_j) f_i, from which relation (2.2) results. If we denote X = ᵗ(x_1, x_2, ..., x_n) and Y = ᵗ(y_1, y_2, ..., y_m), then relation (2.2) can also be written as a matrix equation of the form
Y = AX.   (2.3)
Equation (2.2), or (2.3), is called the equation of the linear transformation T with regard to the considered bases.
Remarks:
1° If L(V_n, W_m) is the set of all linear transformations from V_n with values in W_m, M_{m×n}(K) is the set of all matrices of type m × n, and B and B′ are two fixed bases in V_n and W_m respectively, then the correspondence Ψ: L(V_n, W_m) → M_{m×n}(K), Ψ(T) = A, which associates to a linear transformation T its matrix A relative to the two fixed bases, is a vector space isomorphism. Consequently dim L(V_n, W_m) = m·n.
2° This isomorphism has the property Ψ(T_1 ∘ T_2) = Ψ(T_1) Ψ(T_2), whenever the composition T_1 ∘ T_2 exists. In particular, T: V_n → V_n is invertible if and only if its associated matrix A, with regard to some base of V_n, is invertible.
Let V_n be a K-vector space and T ∈ End(V_n). Considering different bases in V_n, we can associate different square matrices to the linear transformation T. Naturally, one question arises: when do two square matrices represent the same endomorphism? The answer is given by the following theorem.
If A and A′ are the matrices associated to the endomorphism T ∈ End(V_n) with regard to the bases B and B′, and W is the passing matrix from B to B′, then A′ = W⁻¹AW.
Proof. Let B = {e_1, ..., e_n} and B′ = {e′_1, ..., e′_n} be two bases in V_n and W = (w_ij) the passing matrix from base B to base B′, hence e′_j = Σ_{i=1}^{n} w_ij e_i. If A = (a_ij) is the matrix associated to T relative to the base B, hence T(e_j) = Σ_{i=1}^{n} a_ij e_i, and A′ = (a′_ij) the matrix associated to T relative to the base B′, hence T(e′_j) = Σ_{i=1}^{n} a′_ij e′_i, then on one hand
T(e′_j) = T(Σ_k w_kj e_k) = Σ_k w_kj T(e_k) = Σ_i (Σ_k a_ik w_kj) e_i,
and on the other hand
T(e′_j) = Σ_k a′_kj e′_k = Σ_i (Σ_k w_ik a′_kj) e_i.
Out of the two expressions we obtain AW = WA′; W being a nondegenerate matrix, the relation A′ = W⁻¹AW results.
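The change-of-base formula can be verified numerically. In the sketch below (numpy; the matrices A and W are hypothetical examples), A′ = W⁻¹AW is computed and two invariants of similar matrices are checked:

```python
import numpy as np

# Hypothetical matrix A of an endomorphism T in base B, and a passing
# matrix W (columns: coordinates of the new base vectors in B).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
W = np.array([[1.0, 1.0],
              [1.0, -2.0]])          # nondegenerate: det W = -3

A_prime = np.linalg.inv(W) @ A @ W   # matrix of T in the new base

# Similar matrices share determinant and rank.
assert np.isclose(np.linalg.det(A_prime), np.linalg.det(A))
assert np.linalg.matrix_rank(A_prime) == np.linalg.matrix_rank(A)
```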
Remarks:
1° The similarity relation is an equivalence relation on the set M_n(K). Every equivalence class corresponds to an endomorphism T ∈ End(V_n) and contains all the matrices associated to T relative to the bases of the vector space V_n.
2° Two similar matrices A and B = C⁻¹AC have the same determinant:
det B = det(C⁻¹) · det A · det C = det A.
Any two similar matrices also have the same rank, a number which represents the rank of the endomorphism T. Therefore the rank of an endomorphism does not depend on the chosen base of the vector space V (the rank of an endomorphism is invariant to a change of base).

§3. Eigenvectors and Eigenvalues

Let V be an n-dimensional K-vector space and T ∈ End(V) an endomorphism. If we consider different bases in the vector space V, then to the same endomorphism T ∈ End(V) several different, but similar, matrices will correspond. Therefore we are interested in finding that base of V relative to which the matrix associated to the endomorphism T has the simplest form, the canonical form; in this base the relations y_i = Σ_j a_ij x_j that define the endomorphism take the simplest possible expression.
A vector x ∈ V, x ≠ 0, is called an eigenvector of the endomorphism T if there exists a scalar λ ∈ K such that T(x) = λx; the scalar λ is called the eigenvalue of T corresponding to the eigenvector x.
The set of all eigenvalues of T is called the spectrum of the operator T and is denoted by σ(T). The equation T(x) = λx with x ≠ 0 is equivalent to x ∈ Ker(T − λI), where I is the identity endomorphism. If x is an eigenvector of T, then the vectors kx, k ∈ K \ {0}, are also eigenvectors.
The following properties hold: 1) to an eigenvector there corresponds a single eigenvalue; 2) eigenvectors corresponding to distinct eigenvalues are linearly independent; 3) the eigensubspace S_λ = {x ∈ V | T(x) = λx} corresponding to an eigenvalue λ is a vector subspace of V, invariant under T.
Proof. 1) Let us consider x ≠ 0 an eigenvector corresponding to the eigenvalue λ ∈ K. If there were another eigenvalue λ′ ∈ K corresponding to the same eigenvector, with T(x) = λ′x, then it would result λx = λ′x ⇒ (λ − λ′)x = 0 ⇒ λ = λ′.
2) Let x_1, x_2, ..., x_p be eigenvectors corresponding to the distinct eigenvalues λ_1, λ_2, ..., λ_p. We show by induction on p the linear independence of the considered vectors. For p = 1 and x_1 ≠ 0 (being an eigenvector), the set {x_1} is linearly independent. Supposing that the property is true for p − 1, we show that it is true for p eigenvectors. Applying the endomorphism T to the relation k_1x_1 + k_2x_2 + ... + k_px_p = 0, we obtain k_1λ_1x_1 + k_2λ_2x_2 + ... + k_pλ_px_p = 0. Subtracting the first relation multiplied by λ_p from the second, we obtain
k_1(λ_1 − λ_p)x_1 + ... + k_{p−1}(λ_{p−1} − λ_p)x_{p−1} = 0.
The inductive hypothesis gives k_1 = k_2 = ... = k_{p−1} = 0, and using this in the relation k_1x_1 + ... + k_{p−1}x_{p−1} + k_px_p = 0 we obtain k_px_p = 0 ⇒ k_p = 0; that is, x_1, x_2, ..., x_p are linearly independent.
3) For any x, y ∈ S_λ and a, b ∈ K we have
T(ax + by) = a T(x) + b T(y) = aλx + bλy = λ(ax + by),
which means that S_λ is a vector subspace of V. Moreover, for x ∈ S_λ we have T(x) = λx ∈ S_λ, consequently T(S_λ) ⊆ S_λ.
Eigensubspaces corresponding to distinct eigenvalues have only the null vector in common.
Proof. Let λ_1, λ_2 ∈ σ(T), λ_1 ≠ λ_2. Suppose there exists a vector x ≠ 0 with x ∈ S_{λ_1} ∩ S_{λ_2}. Then λ_1x = T(x) = λ_2x, hence (λ_1 − λ_2)x = 0 and, since λ_1 ≠ λ_2, x = 0, a contradiction.
The matrix equation AX = λX can be written in the form (A − λI)X = 0 and is equivalent to the system of linear homogeneous equations:
(a_11 − λ)x_1 + a_12 x_2 + ... + a_1n x_n = 0
a_21 x_1 + (a_22 − λ)x_2 + ... + a_2n x_n = 0
. . . . . . . . . . . . . . . . . . . . . .
a_n1 x_1 + a_n2 x_2 + ... + (a_nn − λ)x_n = 0,
which admits solutions different from the trivial one if and only if
P(λ) = det(A − λI) = 0.
The polynomial P(λ) is called the characteristic polynomial of the matrix A; its expanded form is
P(λ) = (−1)^n [λ^n − δ_1 λ^{n−1} + ... + (−1)^n δ_n],   (3.4)
where δ_i is the sum of the principal minors of order i of the matrix A.
Remarks:
1° The solutions of the characteristic equation det(A − λI) = 0 are the eigenvalues of the matrix A.
2° If the field K is algebraically closed, then all the roots of the characteristic equation lie in K, and therefore the corresponding eigenvectors lie in the K-vector space M_{n×1}(K). If K is not algebraically closed, e.g. K = R, the characteristic equation may also have complex roots, and the corresponding eigenvectors will then belong to the complexified real vector space. For any real symmetric matrix it can be proven that all eigenvalues are real.
3° Two similar matrices have the same characteristic polynomial. Truly, if A and A′ are similar, A′ = C⁻¹AC with C nondegenerate, then
P′(λ) = det(A′ − λI) = det(C⁻¹AC − λI) = det[C⁻¹(A − λI)C] = det(C⁻¹) · det(A − λI) · det C = det(A − λI) = P(λ).
If A ∈ M_n(K) and P(x) = a_0x^n + a_1x^{n−1} + ... + a_n ∈ K[X], then the polynomial P(A) = a_0A^n + a_1A^{n−1} + ... + a_nI is named a matrix polynomial.
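The characteristic polynomial and its roots can be computed numerically. A minimal sketch (numpy; the matrix A is a hypothetical example, not taken from the text):

```python
import numpy as np

# Hypothetical real symmetric 2x2 matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)                  # coefficients of det(lambda*I - A)
roots = np.roots(coeffs)             # roots of the characteristic equation
assert np.allclose(sorted(roots.real), [1.0, 3.0])

# Being real and symmetric, A has real eigenvalues (remark 2 on this page).
assert np.allclose(np.linalg.eigvalsh(A), [1.0, 3.0])
```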
Every square matrix satisfies its own characteristic equation, P(A) = 0 (the Cayley-Hamilton theorem).
Proof. Let us consider P(λ) = det(A − λI) = a_0λ^n + a_1λ^{n−1} + ... + a_n. The adjugate of the matrix A − λI is a matrix polynomial of degree n − 1,
(A − λI)* = B_{n−1}λ^{n−1} + B_{n−2}λ^{n−2} + ... + B_1λ + B_0, B_i ∈ M_n(K),
and satisfies the relation (A − λI)(A − λI)* = P(λ)I, hence
(A − λI)(B_{n−1}λ^{n−1} + B_{n−2}λ^{n−2} + ... + B_1λ + B_0) = (a_0λ^n + a_1λ^{n−1} + ... + a_n)I.
By identifying the coefficients of the powers of λ we obtain a chain of matrix relations which, multiplied on the left by A^n, A^{n−1}, ..., A, I respectively and summed, give exactly P(A) = 0.
Now let us consider an n-dimensional K-vector space V_n, a base B, and let us denote by A ∈ M_n(K) the matrix associated to the endomorphism T with regard to this base. The equation T(x) = λx is then equivalent to (A − λI)X = 0. The eigenvalues of the endomorphism T, if they exist, are the roots of the polynomial P(λ) in the field K, and the eigenvectors of T are the solutions of the matrix equation (A − λI)X = 0. Because of the invariance of the characteristic polynomial under a change of base of V_n, P(λ) depends only on the endomorphism T and not on the matrix representation of T in a given base. Therefore the names characteristic polynomial and characteristic equation of the endomorphism T are well justified for the characteristic polynomial P(λ) of the matrix A and for the characteristic equation det(A − λI) = 0 of the matrix A.

§4. An Endomorphism's Canonical Form

Let us consider the endomorphism T: V_n → V_n defined on the n-dimensional K-vector space V_n. If we consider two bases B and B′ of the vector space V_n and denote by A and A′ the matrices associated to the endomorphism T with respect to these bases, then A′ = W⁻¹AW, where W represents the passing matrix from base B to base B′. Knowing that the matrix associated to an endomorphism depends on the chosen base of the vector space V_n, we will determine that particular base with regard to which the associated matrix has the simplest form, namely a diagonal form.
An endomorphism T ∈ End(V_n) is diagonalizable if and only if there exists a base of V_n formed of eigenvectors of T.
Proof: If T is diagonalizable, then there exists a base B = {e_1, ..., e_n} with respect to which the associated matrix A = (a_ij) has diagonal form, meaning a_ij = 0 for i ≠ j. The action of T on the elements of the base B is given by the relations T(e_i) = a_ii e_i, i = 1, ..., n, hence every vector of the base B is an eigenvector of T.
Conversely, let {v_1, ..., v_n} be a base of V_n formed only of eigenvectors, that is T(v_i) = λ_i v_i, i = 1, ..., n. From these equations we can construct the matrix associated to T in this base:
D = diag(λ_1, λ_2, ..., λ_n),
where the scalars λ_i ∈ K are not necessarily distinct. In the context of this theorem, the matrices of the similarity class corresponding to the diagonalizable endomorphism T are also called diagonalizable; they represent T in the different bases of the vector space V_n.
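Diagonalization can be checked numerically. A minimal sketch (numpy; the matrix A is a hypothetical diagonalizable example): the columns of the diagonalizing matrix are eigenvectors, and D = T⁻¹AT is diagonal.

```python
import numpy as np

# Hypothetical diagonalizable (symmetric) matrix
A = np.array([[5.0, -2.0],
              [-2.0, 5.0]])

eigvals, T = np.linalg.eig(A)        # columns of T: eigenvectors of A
D = np.linalg.inv(T) @ A @ T         # D = T^{-1} A T

assert np.allclose(D, np.diag(eigvals))
assert np.allclose(sorted(eigvals), [3.0, 7.0])
```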
An eigenvalue λ ∈ K, as a root of the characteristic equation P(λ) = 0, has a multiplicity order which is named its algebraic multiplicity, and the dimension dim S_λ of the corresponding eigensubspace is named the geometric multiplicity of the eigenvalue λ.
The geometric multiplicity of an eigenvalue is at most equal to its algebraic multiplicity.
Proof: Let the eigenvalue λ_1 ∈ K have the algebraic multiplicity m ≤ n, and let the corresponding eigensubspace S_{λ_1} have dimension p, with the base {e_1, ..., e_p}.
If p = n, then we have n linearly independent eigenvectors, therefore a base of V_n with regard to which the matrix associated to the endomorphism T has diagonal form, its main diagonal carrying the eigenvalue λ_1. In this case the characteristic polynomial is written as P(λ) = (−1)^n (λ − λ_1)^n, resulting m = n = p.
If p < n, we complete the base of the eigensubspace S_{λ_1} to a base {e_1, ..., e_p, e_{p+1}, ..., e_n} of the vector space V_n. The action of the operator T on the elements of this base is given by:
T(e_i) = λ_1 e_i, i = 1, ..., p,
T(e_j) = Σ_{i=1}^{n} a_ij e_i, j = p + 1, ..., n.
With regard to this base the matrix associated to T has the block form
A = ( λ_1·I_p   B )
    (    0      C ),
hence the characteristic polynomial of T has the form P(λ) = (λ_1 − λ)^p Q(λ), meaning that (λ_1 − λ)^p divides P(λ); therefore p ≤ m. q.e.d.
An endomorphism T ∈ End(V_n) is diagonalizable if and only if all its eigenvalues belong to K and the geometric multiplicity of each eigenvalue equals its algebraic multiplicity, dim S_{λ_i} = m_i.
Proof. If T is diagonalizable, then there is a base B ⊂ V_n formed only of eigenvectors, with regard to which the associated matrix has diagonal form. In these conditions the characteristic polynomial is written as
P(λ) = (−1)^n (λ − λ_1)^{m_1}(λ − λ_2)^{m_2} ... (λ − λ_p)^{m_p},
with λ_i ∈ K the eigenvalues of T, of multiplicity orders m_i, m_1 + m_2 + ... + m_p = n. Each eigenvalue λ_i appears on the diagonal m_i times, and the corresponding vectors of the base belong to S_{λ_i}; together with the previous theorem this gives dim S_{λ_i} = m_i, i = 1, ..., p.
Conversely, if λ_i ∈ K and dim S_{λ_i} = m_i for i = 1, ..., p, then the union of the bases of the eigensubspaces S_{λ_1}, ..., S_{λ_p} consists of m_1 + m_2 + ... + m_p = n linearly independent eigenvectors, hence it is a base of V_n. The matrix A associated to T in this base being diagonal, we conclude that the endomorphism T is diagonalizable.
Practically, the steps required for diagonalizing an endomorphism T are the following:
1° We write the matrix A associated to the endomorphism T with regard to a given base of the vector space V_n.
2° We solve the characteristic equation det(A − λI) = 0, determining the eigenvalues λ_1, ..., λ_p with their corresponding multiplicity orders m_1, m_2, ..., m_p.
3° We use the result of theorem 3.13 and we have the following cases:
I) If λ_i ∈ K, i = 1, ..., p, we compute the dimensions of the eigensubspaces S_{λ_i}:
a) If dim S_{λ_i} = m_i for every i, then T is diagonalizable. We can check this result by constructing the matrix T = [v_1, v_2, ..., v_n], having as columns the coordinates of the eigenvectors (the diagonalizing matrix); it represents the passing matrix from the base considered initially to the base formed of eigenvectors, and the matrix associated to T with regard to the latter base is the diagonal matrix
D = T⁻¹AT = diag(λ_1, ..., λ_1, ..., λ_p, ..., λ_p),
each eigenvalue appearing as many times as its multiplicity order indicates.
b) If there exists an eigenvalue λ_i ∈ K such that dim S_{λ_i} < m_i, then T is not diagonalizable.
II) If some λ_i ∉ K, then T is not diagonalizable. The problem of diagonalizing T can then be considered only if the K-vector space V is replaced by an extension over the field K.
Let us consider V_n a K-vector space and T ∈ End(V_n) an endomorphism defined on V_n. If A ∈ M_n(K) is the matrix associated to T with regard to a base of V_n, then A can be diagonalized only if the conditions of theorem 3.13 are fulfilled. If the eigenvalues of T belong to the field K, λ_i ∈ K, but the geometric multiplicity of some eigenvalue is different from its algebraic multiplicity, dim S_{λ_i} < m_i, then T is no longer diagonalizable; its matrix can, however, still be brought to a simpler form, the Jordan canonical form.
For λ ∈ K, the matrices of the form
(λ),   ( λ  1 ),   ( λ  1  0 )
       ( 0  λ )    ( 0  λ  1 ),  ...
                   ( 0  0  λ )
are named Jordan cells attached to the scalar λ, of order 1, 2, 3, ..., n.
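The Jordan canonical form can be computed exactly with sympy. A minimal sketch (the matrix A is a hypothetical example whose characteristic polynomial is (λ − 3)², with geometric multiplicity 1):

```python
import sympy as sp

# Hypothetical 2x2 matrix with the double eigenvalue 3 but only one
# linearly independent eigenvector, hence a single Jordan cell of order 2.
A = sp.Matrix([[2, 1],
               [-1, 4]])

P, J = A.jordan_form()               # A = P * J * P^{-1}
assert J == sp.Matrix([[3, 1],
                       [0, 3]])
assert A == P * J * P.inv()
```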
A Jordan cell of order p attached to the eigenvalue λ ∈ K, of algebraic multiplicity m ≥ p, corresponds to the linearly independent vectors e_1, e_2, ..., e_p which satisfy the relations:
T(e_1) = λe_1
T(e_2) = λe_2 + e_1
T(e_3) = λe_3 + e_2
. . . . . . . . . . .
T(e_p) = λe_p + e_{p−1}
The vector e_1 is an eigenvector and the vectors e_2, e_3, ..., e_p are called main (principal) vectors.
Remarks:
1° The diagonal form of a diagonalizable endomorphism is a particular case of the Jordan canonical form, in which all Jordan cells have order one.
2° The Jordan canonical form is not unique: the order of the Jordan cells on the main diagonal depends on the chosen ordering of the eigenvectors and main vectors in the base.
3° The number of Jordan cells, equal to the number of linearly independent eigenvectors, as well as their orders, are uniquely determined. The following theorem can be proven:
Practically, for determining the Jordan canonical form of an endomorphism T we follow the steps:
1° We write the matrix A associated to the endomorphism T with regard to the given base.
2° We solve the characteristic equation det(A − λI) = 0, determining the eigenvalues λ_1, ..., λ_p with their multiplicity orders m_1, m_2, ..., m_p.
3° We find the eigensubspaces S_{λ_i}, i = 1, ..., p.
4° We calculate the number of Jordan cells, separately for each eigenvalue λ_i; this number is given by dim S_{λ_i}.
5° For the eigenvalues with dim S_{λ_i} < m_i we determine the corresponding main vectors, by solving the systems
(A − λ_iI)X_1 = v, ..., (A − λ_iI)X_p = X_s,
where v is an eigenvector. Keeping in mind the compatibility conditions and the general form of the eigenvectors and of the main vectors, we determine, by giving arbitrary values to the parameters, the linearly independent eigenvectors and the associated main vectors.
6° We write the base of the vector space V_n by uniting the systems of m_i linearly independent vectors, i = 1, ..., p.
7° By using the matrix T having as columns the coordinates of the vectors of this base, each eigenvector followed by its associated main vectors in this order, we obtain the matrix J = T⁻¹AT, the Jordan canonical form, which contains on the main diagonal the Jordan cells, in the order in which the corresponding eigenvectors and main vectors appear in the constructed base. Each Jordan cell has the order equal to the number of vectors of the system formed by an eigenvector and its associated main vectors (if they exist).

§5. Linear Transformations on Euclidean Vector Spaces

The properties of linear transformations defined on an arbitrary vector space apply to Euclidean vector spaces as well. The scalar product which defines the Euclidean structure allows us to introduce particular classes of linear transformations.

5.1 Orthogonal transformations

Let V and W be two Euclidean R-vector spaces. Without danger of confusion, we denote the scalar products on the two vector spaces by the same symbol < , >. A linear transformation T: V → W is called orthogonal if it preserves the scalar product:
<T(x), T(y)> = <x, y>, ∀ x, y ∈ V.
Examples. 1° The identity transformation T: V → V, T(x) = x, is an orthogonal transformation. 2° The transformation T: V → V that associates to every vector x ∈ V its opposite, T(x) = −x, is an orthogonal transformation.
A linear transformation T is orthogonal if and only if it preserves the Euclidean norm, ‖T(x)‖ = ‖x‖, ∀ x ∈ V.
Proof. If T is orthogonal then <T(x), T(y)> = <x, y>, which for x = y becomes <T(x), T(x)> = <x, x>, that is ‖T(x)‖² = ‖x‖², hence ‖T(x)‖ = ‖x‖.
Conversely, using the relation <a, b> = ½(‖a + b‖² − ‖a‖² − ‖b‖²), we obtain
<T(x), T(y)> = ½(‖T(x) + T(y)‖² − ‖T(x)‖² − ‖T(y)‖²)
= ½(‖T(x + y)‖² − ‖T(x)‖² − ‖T(y)‖²)
= ½(‖x + y‖² − ‖x‖² − ‖y‖²) = <x, y>.
Every orthogonal transformation is injective.
Proof. If T is an orthogonal transformation then ‖T(x)‖ = ‖x‖, and from the hypothesis T(x) = 0 it results ‖x‖ = 0, hence x = 0. Consequently the kernel is Ker T = {0}, meaning that T is injective. Using consequence 4.3 we find that the image of a linearly independent system through an orthogonal transformation is also a linearly independent system.
Orthogonal transformations also preserve the Euclidean distance; truly,
d(T(x), T(y)) = ‖T(x) − T(y)‖ = ‖T(x − y)‖ = ‖x − y‖ = d(x, y).
For an orthogonal transformation T different from the identity transformation, the null vector, T(0) = 0, is the only fixed point; otherwise T coincides with the identity transformation. If W = V and T_1, T_2 are two orthogonal transformations on V, then their composition is also an orthogonal transformation. If the orthogonal transformation T: V → V is also surjective, then T is invertible and, moreover, T⁻¹ is an orthogonal transformation. In these conditions the set of all bijective orthogonal transformations of the Euclidean vector space V forms a group with regard to the composition (product) of orthogonal transformations, named the orthogonal group of the Euclidean vector space V, denoted GO(V), which is a subgroup of the linear group GL(V).
Let us consider, in the finite dimensional Euclidean vector spaces V_n and W_m, the orthonormal bases B = {e_1, ..., e_n} and B′ = {f_1, ..., f_m} respectively. With regard to the orthonormal bases B and B′, the transformation T is characterized by the matrix A = (a_ij), where
T(e_j) = Σ_{i=1}^{m} a_ij f_i, j = 1, ..., n.
The bases B and B′ being orthonormal, we have
<e_i, e_j> = δ_ij and <f_i, f_j> = δ_ij, i, j = 1, ..., n (respectively m).
Let us evaluate the scalar product of the images of the vectors of B:
<T(e_i), T(e_j)> = <Σ_{k=1}^{m} a_ki f_k, Σ_{l=1}^{m} a_lj f_l> = Σ_{k=1}^{m} a_ki a_kj.
By using the orthogonality property of T, <T(e_i), T(e_j)> = <e_i, e_j> = δ_ij, we obtain Σ_{k=1}^{m} a_ki a_kj = δ_ij, that is
ᵗA·A = I_n.   (5.3)
Conversely, if T is a linear transformation characterized, with regard to the orthonormal bases B and B′, by a matrix A with the property ᵗA·A = I_n, then T is an orthogonal transformation. Truly, let x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) be two vectors of the space V_n, with the coordinate columns X and Y. Calculating the scalar product of the images of these vectors, we obtain
<T(x), T(y)> = ᵗ(AX)·(AY) = ᵗX·(ᵗA·A)·Y = ᵗX·Y = <x, y>,
meaning that T is an orthogonal transformation. We have thus also proven the following theorem: a linear transformation T: V_n → W_m is orthogonal if and only if its matrix with regard to a pair of orthonormal bases satisfies ᵗA·A = I_n.
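Criterion (5.3) is easy to verify numerically. A sketch with numpy (the rotation angle and the vectors are hypothetical choices):

```python
import numpy as np

theta = 0.7                          # hypothetical rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Criterion (5.3): t(A) A = I_n
assert np.allclose(A.T @ A, np.eye(2))

# Scalar products are preserved: <T(x), T(y)> = <x, y>
x, y = np.array([1.0, 2.0]), np.array([-0.5, 3.0])
assert np.isclose((A @ x) @ (A @ y), x @ y)
```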
In the case m = n, by taking into consideration that det ᵗA = det A, it results that if A is an orthogonal matrix, then det A = ±1. The subset of the orthogonal matrices with the property det A = 1 forms a subgroup, denoted SO(n; R), of the group of orthogonal matrices of order n, named the special orthogonal group. An orthogonal transformation characterized by a matrix A ∈ SO(n; R) is also called a rotation.
Remarks:
1° Since an orthogonal transformation is injective, if B ⊂ V_n is a base, then its image through the orthogonal transformation T: V_n → W_m is a base of Im T.
2° Any orthogonal transformation between two Euclidean vector spaces of the same dimension is an isomorphism of Euclidean vector spaces.
3° The matrix associated to the composition of two orthogonal transformations T_1, T_2: V_n → V_n with regard to an orthonormal base is the product of the matrices associated to the transformations T_1 and T_2. Thus the orthogonal group of the Euclidean vector space V_n, the group GO(V_n), is isomorphic with the multiplicative group GO(n; R) of the orthogonal matrices of order n.

5.2 Symmetrical linear transformations

Let V and W be two Euclidean R-vector spaces and T: V → W a linear transformation.
An endomorphism T of a Euclidean vector space is called symmetrical if <T(x), y> = <x, T(y)> for all x, y, and antisymmetrical if <T(x), y> = −<x, T(y)> for all x, y. Let V_n be a finite dimensional Euclidean R-vector space and B ⊂ V_n an orthonormal base. If the endomorphism T: V_n → V_n is characterized by the real matrix A ∈ M_n(R), one can easily prove that to a symmetrical (antisymmetrical) endomorphism there corresponds a symmetrical (antisymmetrical) matrix with regard to the orthonormal base.
All eigenvalues of a real symmetrical matrix are real.
Proof. Let λ_1 be an arbitrary root of the characteristic equation det(A − λI) = 0 and X = ᵗ(x_1, x_2, ..., x_n) ≠ 0 an eigenvector corresponding to the eigenvalue λ_1, AX = λ_1X. Denoting by X̄ the conjugate of X and multiplying the relation AX = λ_1X on the left by ᵗX̄, we obtain
ᵗX̄·A·X = λ_1 ᵗX̄·X.
Since A is real and symmetrical, the first member of the equality is a real number: its conjugate is ᵗX·A·X̄, which, being a 1 × 1 matrix, equals its own transpose ᵗX̄·ᵗA·X = ᵗX̄·A·X. Also, ᵗX̄·X > 0 is a real number, and λ_1 is the quotient of two real numbers, hence it is real.
Let now the eigenvector v_1 ∈ V_n correspond to the eigenvalue λ_1 ∈ R, let S_1 be the subspace generated by v_1 and V_{n−1} ⊂ V_n its orthogonal complement, V_n = S_1 ⊕ V_{n−1}.
The orthogonal complement V_{n−1} is invariant under the symmetrical endomorphism T.
Proof. If v_1 ∈ V_n is an eigenvector corresponding to the eigenvalue λ_1, then T(v_1) = λ_1v_1. For v ∈ V_{n−1} the relation <v_1, v> = 0 is satisfied. Let us prove that T(v) ∈ V_{n−1}:
<v_1, T(v)> = <T(v_1), v> = <λ_1v_1, v> = λ_1<v_1, v> = 0.
Based on this result the next theorem can be easily proven.
Eigenvectors of a symmetrical endomorphism corresponding to distinct eigenvalues are orthogonal.
Proof. Let v ∈ S_{λ_i} and w ∈ S_{λ_j} be eigenvectors corresponding to the distinct eigenvalues λ_i ≠ λ_j. Then
<w, T(v)> = λ_i <w, v> and <T(w), v> = λ_j <w, v>.
Since T is symmetrical, we have
(λ_j − λ_i) <w, v> = <T(w), v> − <w, T(v)> = 0.
Thus λ_j ≠ λ_i implies <w, v> = 0. q.e.d.
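The two results above (real eigenvalues, orthogonal eigenvectors) can be illustrated numerically. A sketch with numpy (the symmetric matrix A is a hypothetical example):

```python
import numpy as np

# Hypothetical real symmetric matrix
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

eigvals, W = np.linalg.eigh(A)       # W orthogonal, columns: eigenvectors

assert np.allclose(eigvals, [1.0, 3.0, 5.0])     # real eigenvalues
assert np.allclose(W.T @ W, np.eye(3))           # orthonormal eigenvectors
assert np.allclose(W.T @ A @ W, np.diag(eigvals))  # D = t(W) A W
```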
Truly, the first m_1 vectors belong to the eigensubspace S_{λ_1}, the following m_2 vectors to S_{λ_2}, and so on; within each eigensubspace the base can be chosen orthonormal, and vectors belonging to different eigensubspaces are orthogonal.
With regard to the orthonormal base formed of these eigenvectors, the endomorphism T: V_n → V_n has canonical form: the matrix associated to T in this base is diagonal, having on the main diagonal the eigenvalues, each written as many times as its multiplicity order indicates. It is expressed in terms of the matrix A associated to T in the orthonormal base B through the relation D = ᵗW·A·W, where W is the orthogonal passing matrix from the base B to the base formed of eigenvectors.

5.3 Isometrical transformations on punctual Euclidean spaces

Let E = (E, V, φ) be a punctual Euclidean space, E being the support set, V the director vector space and φ the affine structure function.
A bijective map f: E → E is named a transformation of the set E. If E is endowed with a certain geometrical structure, and f satisfies certain conditions referring to this structure, then f is named a geometrical transformation. Denoting by S(E) the group of transformations of the set E with regard to the composition of functions, and by G a subgroup of it, the pair (E, G) is named a geometric space, or a space with a fundamental group. A subset of points F ⊂ E is named a figure of the geometric space (E, G).
Two figures F_1, F_2 ⊂ E are called congruent if there exists a transformation f ∈ G such that f(F_1) = F_2.
Let E_2 = (E_2, V_2, φ) be a two-dimensional punctual Euclidean space. The map r_{O,α}: E_2 → E_2, r_{O,α}(P) = P′, with the properties d(O, P′) = d(O, P) and ∠POP′ = α, is called the rotation of center O and angle α. The associated linear transformation T: V_2 → V_2 maps the vector OP onto the vector OP′.
In the geometry of Euclidean spaces we are foremost interested in those geometrical transformations which preserve certain properties of the figures of the considered space. In other words, we will consider certain subgroups of the affinity group of the Euclidean space which govern these transformations.
If we consider the representatives of the vectors with a common origin, the distance is expressed by means of the norm, d(P, Q) = ‖PQ‖. If the punctual space E_3 is the space of Euclidean geometry, then an isometry f: E_3 → E_3 is a bijective, distance-preserving map, hence a geometrical transformation, named an isometric transformation. The set of isometric transformations, with regard to the composition of maps, forms a group Izo E, called the group of isometries.
The geometrical transformations given in the previous examples (central symmetry, axial symmetry, translation and rotation) are isometrical transformations.
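As a small numerical sketch (the sample points and parameters below are my own, not from the text), one can verify that all four example transformations preserve the euclidian distance:

```python
# Check that central symmetry, axial symmetry, translation and rotation
# all preserve the euclidean distance between two sample points.
import numpy as np

rng = np.random.default_rng(0)
P, Q = rng.standard_normal(2), rng.standard_normal(2)

theta = 0.7                                  # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

transforms = {
    "central symmetry": lambda X: -X,            # symmetry about the origin
    "axial symmetry":   lambda X: X * [1, -1],   # symmetry about the Ox axis
    "translation":      lambda X: X + [2.0, -3.0],
    "rotation":         lambda X: R @ X,
}

d = np.linalg.norm(P - Q)
for name, f in transforms.items():
    assert np.isclose(np.linalg.norm(f(P) - f(Q)), d), name
print("all four transformations preserve distance")
```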
Let us consider the euclidian plane PE = (E₂, IzoE₂), let R = (O; ī, j̄) be an orthonormal cartesian frame and g : E₂ → E₂ an isometry with O as a fixed point. Consider the point A(1, 0) and an arbitrary point M(x, y), having the images A′(a, b) and M′(x′, y′) respectively. From the conditions d(O, A′) = d(O, A), d(O, M′) = d(O, M) and d(A′, M′) = d(A, M) we obtain the equation system

a² + b² = 1 ,  x′² + y′² = x² + y² ,  (x′ - a)² + (y′ - b)² = (x - 1)² + y² .

The general solution is

(5.8)  x′ = ax - εby ,  y′ = bx + εay ,  with a² + b² = 1 , ε = ±1 .
Reciprocally, the formulae (5.8) represent an isometry. Indeed, if M₁ and M₂ are two arbitrary points and M′₁, M′₂ are their images respectively, then

d(M′₁, M′₂)² = (x′₂ - x′₁)² + (y′₂ - y′₁)² = (a² + b²)[(x₂ - x₁)² + (y₂ - y₁)²] = d(M₁, M₂)² ,

hence d(M′₁, M′₂) = d(M₁, M₂). The fixed points of the isometry characterized by the equations (5.8) are obtained by setting x′ = x and y′ = y in (5.8), hence

(5.9)  (a - 1)x - εby = 0 ,  bx + (εa - 1)y = 0 .

The system (5.9) is linear and homogeneous, and has the determinant D = (ε + 1)(1 - a). If ε = -1, the system (5.9) admits an infinity of fixed points, hence the equations (5.8) represent an axial symmetry. If ε = 1 and a ≠ 1, the isometry (5.8) admits only the origin as a fixed point and represents a rotation, and for ε = 1 and a = 1 the application (5.8) becomes the identical transformation.
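This classification can be checked numerically. The sketch below (the parameter value a = 0.6 is arbitrary, and b ≥ 0 is one of the two choices allowed by a² + b² = 1) builds the matrix of (5.8) and verifies that the determinant of the fixed-point system (5.9) equals (ε + 1)(1 - a), with det A = ε separating rotations from axial symmetries:

```python
# The isometry (5.8): x' = a x - eps b y, y' = b x + eps a y, a^2 + b^2 = 1.
# Its fixed-point system (5.9) has matrix A - I and determinant (eps+1)(1-a).
import numpy as np

def isometry_matrix(a, eps):
    b = np.sqrt(1.0 - a * a)          # choose b >= 0 with a^2 + b^2 = 1
    return np.array([[a, -eps * b],
                     [b,  eps * a]])

a = 0.6
for eps in (+1, -1):
    A = isometry_matrix(a, eps)
    D = np.linalg.det(A - np.eye(2))  # determinant of the system (5.9)
    assert np.isclose(D, (eps + 1) * (1 - a))
    # eps = +1: rotation (det A = +1); eps = -1: axial symmetry (det A = -1)
    assert np.isclose(np.linalg.det(A), eps)
print("determinant of (5.9) equals (eps+1)(1-a) for both signs of eps")
```

For ε = -1 the determinant vanishes identically, which is exactly why the fixed points form a whole line (the axis of symmetry).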
The equations (5.8) can be written in the matrix form X′ = AX, where X = ᵗ(x, y) and the associated matrix

A = | a  -εb |
    | b   εa |

is orthogonal. For ε = 1 (considering the identical application as the rotation of angle 0), the orthogonal matrix A has the property det A = 1, which means that the subgroup of the rotations of the euclidian plane is isomorphic to the special orthogonal group SO(2; R). By composing the isometries having the origin as a fixed point with the translations which transform the origin into the point (x₀, y₀), we obtain the isometries of the euclidian plane PE = (E₂, IzoE₂), characterized analytically by the equations

(5.10)  x′ = ax - εby + x₀ ,  y′ = bx + εay + y₀ ,  a² + b² = 1 ,  ε = ±1 .

Using the trigonometrical functions, with a = cos φ and b = sin φ, φ ∈ R, the equations (5.10) take the form

(5.11)  x′ = x cos φ - εy sin φ + x₀ ,  y′ = x sin φ + εy cos φ + y₀ .
For ε = 1, the transformations characterized by the equations

x′ = x cos φ - y sin φ + x₀ ,  y′ = x sin φ + y cos φ + y₀ ,

meaning those isometries obtained by composing a rotation with a translation (the movements of the plane), form a subgroup called the subgroup of movements.

In the tridimensional punctual euclidian space E₃ = (E₃, V₃, φ), the isometries T : E₃ → E₃, T(x₁, x₂, x₃) = (y₁, y₂, y₃), are the transformations characterized by the equations

y_i = a_{i1}x₁ + a_{i2}x₂ + a_{i3}x₃ + b_i ,  i = 1, 2, 3 ,

where the matrix A = (a_{ij}), i, j = 1, 2, 3, is orthogonal: ᵗA·A = I₃.
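A minimal sketch of such an isometry of E₃ (the rotation axis Oz, the angle and the translation vector below are sample choices of my own): the map y = Ax + b with ᵗA·A = I₃ preserves distances.

```python
# An isometry of E_3: y = A x + b with A orthogonal (tA . A = I_3).
# Here A is a rotation about the Oz axis; b is an arbitrary translation.
import numpy as np

phi = 0.9
A = np.array([[np.cos(phi), -np.sin(phi), 0.0],
              [np.sin(phi),  np.cos(phi), 0.0],
              [0.0,          0.0,         1.0]])
b = np.array([1.0, -2.0, 0.5])

assert np.allclose(A.T @ A, np.eye(3))      # the orthogonality condition

P = np.array([0.2, 1.0, -3.0])
Q = np.array([4.0, -1.5, 2.0])
T = lambda x: A @ x + b
assert np.isclose(np.linalg.norm(T(P) - T(Q)), np.linalg.norm(P - Q))
print("A is orthogonal and T preserves distances in E_3")
```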
1. Find out which of the following applications are linear transformations:
… fixed, is linear. Determine Ker T, Im T and verify the rank theorem.
Show that R³ = … .
A = …
A transformation with this property is called a tangent structure.
10. Show that the transformation T : R^{2n} → R^{2n} defined by the relation T(x) = (x_{n+1}, x_{n+2}, …, x_{2n}, -x₁, -x₂, …, -x_n) has the property T² = -Id_{R^{2n}}. A transformation with this property is called a complex structure.
11. Determine the linear transformation which transforms the point (x₁, x₂, x₃) ∈ R³ into its symmetric with respect to the plane x₁ + x₂ + x₃ = 0, and show that the transformation thus determined is orthogonal.
12. Determine the eigenvalues and the eigenvectors of the endomorphism T : R³ → R³ characterized by the matrix A = …
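For the complex-structure exercise, a short numerical check (assuming, as the condition T² = -Id forces, that the last n components of T(x) carry minus signs that were lost in the source):

```python
# T(x) = (x_{n+1}, ..., x_{2n}, -x_1, ..., -x_n) on R^{2n} satisfies
# T^2 = -Id, i.e. T is a complex structure.
import numpy as np

def T(x):
    n = len(x) // 2
    return np.concatenate([x[n:], -x[:n]])   # swap halves, negate the second

x = np.arange(1.0, 7.0)          # a sample vector in R^6 (n = 3)
assert np.allclose(T(T(x)), -x)  # T^2 = -Id
print("T^2 = -Id holds")
```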
13. Study the possibility of reduction to the canonical form and, if possible, find the diagonalizing matrix for the endomorphisms with the following associated matrices: A = …
14. Determine the Jordan canonical form for the following matrices: A = …
15. Determine an orthonormed base of the space R³ in which the endomorphism T : R³ → R³, T(x) = (x₁ + x₂ - 4x₃, 2x₁ - 4x₂ - 2x₃, 4x₁ - 2x₂ - x₃), admits the canonical form.
16. Using the Hamilton-Cayley theorem, calculate A^{-1} and P(A), where P(x) = x⁴ + x³ + x² + x + 1, for the following matrices: A = …
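The Hamilton-Cayley technique of the last exercise can be sketched on a sample 3×3 matrix of my own (the exercise's matrices were lost in the source): since A satisfies its characteristic polynomial p(λ) = λ³ - c₂λ² + c₁λ - c₀, we get A³ - c₂A² + c₁A - c₀I = 0, hence A⁻¹ = (A² - c₂A + c₁I)/c₀ whenever c₀ = det A ≠ 0.

```python
# Hamilton-Cayley for a 3x3 matrix: p(t) = t^3 - c2 t^2 + c1 t - c0 with
# c2 = tr A, c1 = sum of principal 2x2 minors, c0 = det A.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

c2 = np.trace(A)
c1 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
c0 = np.linalg.det(A)

# Hamilton-Cayley relation: A^3 - c2 A^2 + c1 A - c0 I = 0
assert np.allclose(A @ A @ A - c2 * (A @ A) + c1 * A - c0 * np.eye(3), 0.0)

# Multiply the relation by A^{-1} and solve for it:
A_inv = (A @ A - c2 * A + c1 * np.eye(3)) / c0
assert np.allclose(A @ A_inv, np.eye(3))
print("A^{-1} obtained from the Hamilton-Cayley relation")
```

The same relation lets one reduce any polynomial P(A) of degree ≥ 3 to a polynomial of degree at most 2 in A.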
}