



Linear transformations



Together with the fundamental notion of vector space, the linear transformation is another basic concept of linear algebra: it is the map that carries the linearity structure from one vector space to another.

The widespread use of this notion in geometry warrants a detailed treatment of the subject.

1. Definition and General Properties

Let V and W be two vector spaces over the commutative field K.

1.1 Definition.

A function T : V → W with the following properties:

1) T(x + y) = T(x) + T(y), ∀ x, y ∈ V

2) T(αx) = α T(x), ∀ x ∈ V, ∀ α ∈ K

is called a linear transformation (linear application, linear operator or vector space morphism).

For the image T(x) under a linear transformation T, the notation Tx is also used sometimes.

1.2 Corollary.

The application T : V → W is a linear transformation if and only if

T(αx + βy) = α T(x) + β T(y), ∀ x, y ∈ V, ∀ α, β ∈ K. (1.1)

The proof is direct; condition (1.1) says that an application T : V → W is a linear transformation precisely when the image of a linear combination of vectors is the same linear combination of the images of these vectors.
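As a quick illustration of condition (1.1), the following minimal NumPy sketch checks the corollary numerically for a matrix map T(x) = Ax (Example 1 below); the matrix A and the test vectors are arbitrary choices, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))       # A in M_{3x2}(R), so T : R^2 -> R^3
T = lambda x: A @ x                   # T(x) = Ax

x = rng.standard_normal(2)
y = rng.standard_normal(2)
alpha, beta = 2.5, -1.0

# Corollary 1.2: T(alpha*x + beta*y) = alpha*T(x) + beta*T(y)
assert np.allclose(T(alpha * x + beta * y), alpha * T(x) + beta * T(y))
```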

Examples

1. The application T : Rⁿ → Rᵐ, T(x) = AX, where A ∈ M_{m×n}(R) and X = ᵗx is the column of coordinates of x, is a linear transformation. In the particular case n = m = 1, the application defined by T(x) = ax, a ∈ R, is linear.

2. If U ⊆ V is a vector subspace, then the application T : U → V defined by T(x) = x is a linear transformation, named the inclusion application. In general, the restriction of a linear transformation to an arbitrary subset S ⊆ V is not a linear transformation; linearity is inherited only by vector subspaces.

3. The application T : C¹(a, b) → C⁰(a, b), T(f) = f′, is linear.

4. The application T : C⁰(a, b) → R, T(f) = ∫_a^b f(x) dx, is linear.

5. If T : V → W is a bijective linear transformation, then T⁻¹ : W → V is a linear transformation.

The set of linear transformations from the vector space V to the vector space W is denoted by L(V, W).

If we define on the set L(V, W) these operations:

(T1 + T2)(x) := T1(x) + T2(x), ∀ x ∈ V

(αT)(x) := α T(x), ∀ x ∈ V, ∀ α ∈ K

then L(V, W) acquires a K-vector space structure.

A vector space isomorphism is a bijective linear transformation T: V W.

If W = V, then a linear application T : V → V is named an endomorphism of the vector space V, and the set of endomorphisms is denoted by End(V). For two endomorphisms T1, T2 ∈ End(V) we can define the composition:

(T1 ∘ T2)(x) := T1(T2(x)), ∀ x ∈ V.

The operation above is named the product of the transformations T1 and T2, written shortly T1T2.

A bijective endomorphism T : V → V is named an automorphism of the vector space V, and the set of automorphisms is denoted by Aut(V).

The set of automorphisms of a vector space is stable with respect to the product of endomorphisms and forms a group GL(V) ⊂ End(V), also called the linear group of the vector space V.

If W = K, then a linear application T : V → K is named a linear form, and the set V* = L(V, K) of all linear forms on V is a K-vector space named the dual of the vector space V.

If V is a Euclidean vector space of finite dimension, then its dual V* has the same dimension and is identified with V.

1.3 Theorem.

If T : V → W is a linear transformation, then:

a) T(0V) = 0W and T(−x) = −T(x), ∀ x ∈ V.

b) The image T(U) ⊆ W of a vector subspace U ⊆ V is also a vector subspace.

c) The inverse image T⁻¹(W′) ⊆ V of a vector subspace W′ ⊆ W is also a vector subspace.

d) If the vectors x1, x2, ..., xn ∈ V are linearly dependent, then the vectors T(x1), T(x2), ..., T(xn) ∈ W are also linearly dependent.

Proof. a) Setting α = 0 and then α = −1 in the equation T(αx) = α T(x), we obtain T(0V) = 0W and T(−x) = −T(x), respectively.

Henceforth we drop the indexes of the two zero vectors.

b) For u, v ∈ T(U), there exist x, y ∈ U such that u = T(x) and v = T(y). Since U ⊆ V is a vector subspace, for x, y ∈ U and α, β ∈ K we have αx + βy ∈ U, and with relation (1.1) we obtain

αu + βv = α T(x) + β T(y) = T(αx + βy) ∈ T(U).

c) If x, y ∈ T⁻¹(W′), then T(x), T(y) ∈ W′, and for α, β ∈ K we have α T(x) + β T(y) = T(αx + βy) ∈ W′ (W′ being a vector subspace), whence αx + βy ∈ T⁻¹(W′).

d) We apply the transformation T to the relation of linear dependence λ1x1 + λ2x2 + ... + λnxn = 0 and, by using a), we obtain the linear dependency relation λ1 T(x1) + λ2 T(x2) + ... + λn T(xn) = 0.

1.4 Consequence

If T : V → W is a linear transformation, then:

a) The set Ker T = T⁻¹(0) ⊆ V, called the kernel of the linear transformation T, is a vector subspace.

b) The image of the linear transformation T, Im T = T(V) ⊆ W, is a vector subspace.

c) If T(x1), T(x2), ..., T(xn) ∈ W are linearly independent, then the vectors x1, x2, ..., xn ∈ V are also linearly independent.

1.5 Theorem.

A linear transformation T : V → W is injective if and only if Ker T = {0}.

Proof. The injectivity of the linear transformation T combined with the general property T(0) = 0 implies Ker T = {0}.

Conversely, if Ker T = {0} and T(x) = T(y), using the linearity property we obtain T(x − y) = 0, so x − y ∈ Ker T, that is x = y, and T is injective.

The nullity of the operator T is the dimension of the kernel Ker T.

The rank of the operator T is the dimension of the image Im T.

1.6 Theorem.

(rank theorem) If the vector space V is finite dimensional, then the vector space Im T is also finite dimensional and we have the relation:

dim Ker T + dim Im T = dim V.

Proof. Let us write n = dim V and s = dim Ker T. For s ≥ 1, consider a base {e1, e2, ..., es} of Ker T and complete it to B = {e1, ..., es, es+1, ..., en}, a base of the entire vector space V. The vectors es+1, es+2, ..., en represent a base of a subspace supplementary to the subspace Ker T.

For any y ∈ Im T there is x = Σ_{i=1}^{n} xi ei ∈ V such that y = T(x).

Since T(e1) = T(e2) = ... = T(es) = 0, we obtain

y = T(x) = Σ_{i=1}^{n} xi T(ei) = xs+1 T(es+1) + ... + xn T(en),

which means that T(es+1), T(es+2), ..., T(en) generate the subspace Im T.

We must prove that the vectors T(es+1), ..., T(en) are linearly independent. Hence,

λs+1 T(es+1) + ... + λn T(en) = 0 ⟹ T(λs+1 es+1 + ... + λn en) = 0,

which means that λs+1 es+1 + ... + λn en ∈ Ker T. Moreover, as the subspace Ker T has only the null vector in common with its supplementary subspace, we obtain:

λs+1 es+1 + ... + λn en = 0 ⟹ λs+1 = ... = λn = 0,

that is, T(es+1), ..., T(en) are linearly independent. Therefore the image subspace Im T is finite dimensional, and moreover

dim Im T = n − s = dim V − dim Ker T.

For s = 0 (Ker T = {0} and dim Ker T = 0), we consider a base B = {e1, e2, ..., en} of the vector space V, and by the same reasoning we obtain that T(e1), T(e2), ..., T(en) represents a base of the space Im T, meaning dim Im T = dim V. (q.e.d.)
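The rank theorem can be checked numerically for a matrix transformation. The sketch below is illustrative only: the matrix is an arbitrary rank-deficient example, and the kernel dimension is read off from the vanishing singular values.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],      # second row = 2 * first row, so rank(A) = 2
              [1., 0., 1.]])
n = A.shape[1]                   # n = dim V

dim_im = np.linalg.matrix_rank(A)          # dim Im T, the rank of the operator

# dim Ker T = number of (numerically) zero singular values of A
_, s, _ = np.linalg.svd(A)
dim_ker = int(np.sum(s < 1e-10))

assert dim_ker + dim_im == n     # dim Ker T + dim Im T = dim V  (Theorem 1.6)
```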

The property of linear dependence of a vector system is preserved by a linear transformation; by contrast, linear independence is generally not preserved. The conditions under which the linear independence of a vector system is maintained are given by the following theorem:

1.7 Theorem.

If V is an n-dimensional vector space and T : V → W a linear transformation, then the following statements are equivalent:

1) T is injective;

2) the image of any linearly independent system of vectors e1, e2, ..., ep ∈ V (p ≤ n) is a system of vectors T(e1), T(e2), ..., T(ep) which are also linearly independent.

Proof. 1) ⟹ 2) Let us consider an injective linear transformation T, the linearly independent vectors e1, e2, ..., ep, and T(e1), T(e2), ..., T(ep) the images of these vectors.

For λi ∈ K, we get

λ1 T(e1) + λ2 T(e2) + ... + λp T(ep) = 0 ⟹

T(λ1 e1 + λ2 e2 + ... + λp ep) = 0 ⟹

λ1 e1 + λ2 e2 + ... + λp ep ∈ Ker T = {0} (T injective) ⟹

λ1 e1 + λ2 e2 + ... + λp ep = 0 ⟹ λ1 = λ2 = ... = λp = 0,

therefore the vectors T(e1), ..., T(ep) are linearly independent.

2) ⟹ 1) Suppose there is a vector x ≠ 0 whose image is T(x) = 0. If B = {e1, e2, ..., en} ⊂ V is a base, then there exist xi ∈ K, not all null, such that x = Σ_{i=1}^{n} xi ei. Since T(e1), T(e2), ..., T(en) are linearly independent, from the relation T(x) = x1 T(e1) + x2 T(e2) + ... + xn T(en) = 0 it results that x1 = x2 = ... = xn = 0, so x = 0, a contradiction. We obtain Ker T = {0}, hence T is injective.

1.8 Consequence.

If V and W are two finite dimensional vector spaces and T : V → W is a linear transformation, then:

1) if T is injective and {e1, e2, ..., en} is a base of V, then {T(e1), T(e2), ..., T(en)} is a base of Im T;

2) two isomorphic vector spaces have the same dimension.

The proof follows directly from theorems 1.6 and 1.7.

2. The Matrix of a Linear Transformation

Let V and W be two vector spaces over the field K.

2.1 Theorem.

If B = {e1, e2, ..., en} is a base of the vector space V and w1, w2, ..., wn are n arbitrary vectors from W, then there is a linear transformation T : V → W with the property T(ei) = wi, i = 1, ..., n, and it is unique.

Proof. Let us consider a vector x ∈ V, written in base B as x = Σ_{i=1}^{n} xi ei. The correspondence T(x) = Σ_{i=1}^{n} xi wi defines an application T : V → W with the property T(ei) = wi, i = 1, ..., n. This application is linear. Indeed, if y = Σ_{i=1}^{n} yi ei is another arbitrary vector from V, then the linear combination

αx + βy = Σ_{i=1}^{n} (αxi + βyi) ei, α, β ∈ K,

has the image

T(αx + βy) = Σ_{i=1}^{n} (αxi + βyi) wi = α Σ_{i=1}^{n} xi wi + β Σ_{i=1}^{n} yi wi = α T(x) + β T(y), hence T is linear.

Let us suppose that there is another T′ : V → W with the property T′(ei) = wi, i = 1, ..., n. Accordingly, for any x ∈ V, x = Σ_{i=1}^{n} xi ei, we obtain:

T′(x) = T′(Σ_{i=1}^{n} xi ei) = Σ_{i=1}^{n} xi T′(ei) = Σ_{i=1}^{n} xi wi =

= Σ_{i=1}^{n} xi T(ei) = T(Σ_{i=1}^{n} xi ei) = T(x),

that is, the uniqueness of the linear transformation T.

If the vectors w1, w2, ..., wn ∈ W are linearly independent, then the linear transformation T defined in Theorem 2.1 is injective.

Theorem 2.1 states that a linear transformation T : V → W, dim V = n, is perfectly determined if its values on the vectors of a base B ⊂ V are known.

Let Vn and Wm be two K-vector spaces of dimensions n and m respectively, and let T : Vn → Wm be a linear transformation. If B = {e1, e2, ..., en} is a fixed base in Vn and B′ = {f1, f2, ..., fm} is a fixed base in Wm, then the linear transformation T is uniquely determined by the values T(ej) ∈ Wm. For j = 1, ..., n we associate to the image T(ej) the m-tuple (a1j, a2j, ..., amj) of its coordinates, that is

T(ej) = Σ_{i=1}^{m} aij fi, j = 1, ..., n. (2.1)

The coefficients aij ∈ K, i = 1, ..., m, j = 1, ..., n, uniquely define the matrix A = (aij) ∈ M_{m×n}(K). If we consider the spaces Vn and Wm and the bases B and B′ fixed, then the matrix A ∈ M_{m×n}(K) determines the linear transformation T uniquely.

2.2 Definition.

The matrix A ∈ M_{m×n}(K) whose elements are given by relation (2.1) is called the matrix associated to the linear transformation T with regard to the pair of bases B and B′.

2.3 Theorem.

If x = Σ_{j=1}^{n} xj ej has the image y = T(x) = Σ_{i=1}^{m} yi fi, then

yi = Σ_{j=1}^{n} aij xj, i = 1, ..., m. (2.2)

Indeed,

T(x) = Σ_{j=1}^{n} xj T(ej) = Σ_{j=1}^{n} xj Σ_{i=1}^{m} aij fi =

= Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij xj ) fi,

from which relation (2.2) results.

If we write X = ᵗ(x1, x2, ..., xn) and Y = ᵗ(y1, y2, ..., ym), then relation (2.2) can also be written as a matrix equation of the following form:

Y = AX (2.3)

The equation (2.2) or (2.3) is called the equation of the linear transformation T with regard to the considered bases.
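A small sketch of relations (2.1)-(2.3): the matrix A of a transformation T (the map below is an arbitrary illustrative choice, not one from the text) is built column by column from the images of the base vectors, and the equation Y = AX is then verified.

```python
import numpy as np

def T(x):
    # illustrative T : R^3 -> R^2, T(x1,x2,x3) = (x1 + x3, x2 - x3)
    return np.array([x[0] + x[2], x[1] - x[2]])

E = np.eye(3)                                         # canonical base e1, e2, e3
A = np.column_stack([T(E[:, j]) for j in range(3)])   # column j holds T(ej), as in (2.1)

x = np.array([2., -1., 5.])                           # coordinates X of a vector
assert np.allclose(A @ x, T(x))                       # Y = AX reproduces T(x), as in (2.3)
```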

Remarks:

1. If L(Vn, Wm) is the set of all linear transformations from Vn with values in Wm, M_{m×n}(K) is the set of all matrices of type m × n, and B and B′ are two fixed bases in Vn and Wm respectively, then the correspondence

Ψ : L(Vn, Wm) → M_{m×n}(K), Ψ(T) = A,

which associates to a linear transformation T its matrix A relative to the two fixed bases, is a vector space isomorphism. Consequently, dim L(Vn, Wm) = m·n.

2. This isomorphism has the following properties:

- Ψ(T1 ∘ T2) = Ψ(T1) Ψ(T2), whenever the composition T1 ∘ T2 is defined;

- T : Vn → Vn is invertible if and only if the associated matrix A, with regard to some base of Vn, is invertible.

Let Vn be a K-vector space and T ∈ End(Vn). Considering different bases in Vn, we can associate different square matrices to the same linear transformation T. Naturally, one question arises: when do two square matrices represent the same endomorphism? The answer is given by the following theorem.

2.4 Theorem.

Two matrices A, A′ ∈ Mn(K), relative to the bases B, B′ of Vn, represent the same linear transformation T : Vn → Vn if and only if A′ = W⁻¹AW, where W is the passing matrix from the base B to the base B′.

Proof. Let B = {e1, e2, ..., en} and B′ = {e′1, e′2, ..., e′n} be two bases in Vn and W = (wij) the passing matrix from base B to base B′, hence e′j = Σ_{i=1}^{n} wij ei, j = 1, ..., n.

If A = (aij) is the matrix of T relative to the base B, so that T(ej) = Σ_{i=1}^{n} aij ei, j = 1, ..., n, and A′ = (a′ij) is the matrix of T relative to the base B′, so that T(e′j) = Σ_{i=1}^{n} a′ij e′i, j = 1, ..., n, then the images T(e′j) can be written in two ways:

T(e′j) = Σ_{k=1}^{n} a′kj e′k = Σ_{k=1}^{n} a′kj Σ_{i=1}^{n} wik ei, and respectively

T(e′j) = T(Σ_{k=1}^{n} wkj ek) = Σ_{k=1}^{n} wkj T(ek) = Σ_{k=1}^{n} wkj Σ_{i=1}^{n} aik ei.

Out of the two expressions we obtain WA′ = AW.

W being a nonsingular matrix, the relation A′ = W⁻¹AW results.

2.5 Definition.

Two matrices A, B ∈ Mn(K) are called similar if there is a nonsingular matrix C ∈ Mn(K) such that B = C⁻¹AC.

Remarks:

1. The similarity relation is an equivalence relation on the set Mn(K). Every equivalence class corresponds to an endomorphism T ∈ End(Vn) and contains all the matrices associated to T relative to the bases of the vector space Vn.

2. The matrices A and B have the same determinant:

det B = det(C⁻¹) · det A · det C = det A.

3. Any two similar matrices have the same rank, a number which represents the rank of the endomorphism T. Therefore the rank of an endomorphism does not depend on the chosen base of the vector space V (the rank of an endomorphism is invariant to base change).
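The change-of-base formula of Theorem 2.4 and the invariants just mentioned can be illustrated numerically; A and W below are arbitrary choices, with W nonsingular.

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])                 # matrix of T in base B
W = np.array([[1., 1.],
              [0., 1.]])                 # passing matrix: columns are the new base
                                         # vectors expressed in the old coordinates

A_prime = np.linalg.inv(W) @ A @ W       # matrix of T in base B'

# similar matrices share determinant and rank
assert np.isclose(np.linalg.det(A_prime), np.linalg.det(A))
assert np.linalg.matrix_rank(A_prime) == np.linalg.matrix_rank(A)
```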

3. Eigenvectors and Eigenvalues

Let V be an n-dimensional K-vector space and T ∈ End(V) an endomorphism.

If we consider different bases in the vector space V, then to the same endomorphism T ∈ End(V) there correspond different, similar, matrices. Therefore we are interested in finding that base of V relative to which the matrix of the endomorphism T has the simplest form, the canonical form. In this case the relations yi = Σ_{j=1}^{n} aij xj that define the endomorphism T will have the simplest expressions. We will solve this problem with the help of the eigenvalues and eigenvectors of the endomorphism T.

3.1 Definition.

Let V be a K-vector space and T ∈ End(V) an endomorphism.

A vector x ∈ V, x ≠ 0, is called an eigenvector of the endomorphism T : V → V if there is a λ ∈ K such that

T(x) = λx.

The scalar λ ∈ K is called the eigenvalue of T corresponding to the eigenvector x.

The set of all eigenvalues of T is called the spectrum of the operator T and is denoted by σ(T).

The equation T(x) = λx with x ≠ 0 is equivalent to x ∈ Ker(T − λI), where I is the identity endomorphism.

If x is an eigenvector of T, then the vectors kx, k ∈ K, k ≠ 0, are also eigenvectors.

3.2 Theorem.

If V is a K-vector space and T ∈ End(V), then:

1) to any eigenvector of T there corresponds a single eigenvalue λ ∈ σ(T);

2) eigenvectors corresponding to distinct eigenvalues are linearly independent;

3) the set Sλ = {x ∈ V | T(x) = λx} ⊆ V is a vector subspace, invariant with regard to T, that is T(Sλ) ⊆ Sλ; it is named the eigensubspace corresponding to the eigenvalue λ ∈ σ(T).

Proof. 1) Let x ≠ 0 be an eigenvector corresponding to the eigenvalue λ ∈ K. If there were another eigenvalue λ′ ∈ K corresponding to the same eigenvector, with T(x) = λ′x, it would result that λx = λ′x, so (λ − λ′)x = 0, and since x ≠ 0 we get λ = λ′.

2) Let x1, x2, ..., xp be eigenvectors corresponding to the distinct eigenvalues λ1, λ2, ..., λp. We show by induction on p the linear independence of the considered vectors. For p = 1 and x1 ≠ 0 (being an eigenvector), the set {x1} is linearly independent. Supposing that the property is true for p − 1, we show that it is true for p eigenvectors. Applying the endomorphism T to the relation k1x1 + k2x2 + ... + kpxp = 0 we obtain k1λ1x1 + k2λ2x2 + ... + kpλpxp = 0. Subtracting the first relation multiplied by λp from the second one, we obtain:

k1(λ1 − λp)x1 + ... + kp−1(λp−1 − λp)xp−1 = 0.

The inductive hypothesis gives k1 = k2 = ... = kp−1 = 0, and by using this in the relation k1x1 + ... + kp−1xp−1 + kpxp = 0 we obtain kpxp = 0, so kp = 0; that is, x1, x2, ..., xp are linearly independent.

3) For any x, y ∈ Sλ and α, β ∈ K we have:

T(αx + βy) = α T(x) + β T(y) = αλx + βλy = λ(αx + βy), which means that Sλ is a vector subspace of V.

For x ∈ Sλ we have T(x) = λx ∈ Sλ, and consequently T(Sλ) ⊆ Sλ.

3.3 Theorem.

The eigensubspaces Sλ1, Sλ2 corresponding to distinct eigenvalues λ1 ≠ λ2 have only the null vector in common.

Proof. Let λ1, λ2 ∈ σ(T), λ1 ≠ λ2. Suppose there exists a vector x ∈ Sλ1 ∩ Sλ2 different from 0, for which we can write the relations T(x) = λ1x and T(x) = λ2x. We obtain (λ1 − λ2)x = 0, so λ1 = λ2, which is a contradiction. Therefore Sλ1 ∩ Sλ2 = {0}.

3.4 Definition.

The non-zero column matrix X ∈ M_{n,1}(K) is called an eigenvector of the matrix A ∈ Mn(K) if there is a λ ∈ K such that AX = λX. The scalar λ ∈ K is called an eigenvalue of the matrix A.

The matrix equation AX = λX can be written in the form (A − λI)X = 0 and is equivalent to the homogeneous system of linear equations:

Σ_{j=1}^{n} (aij − λ δij) xj = 0, i = 1, ..., n, (3.2)

which admits solutions different from the trivial one if and only if

P(λ) = det(A − λI) = 0. (3.3)

3.5 Definition.

The polynomial P(λ) = det(A − λI) is called the characteristic polynomial of the matrix A, and the equation P(λ) = 0 is called the characteristic equation of the matrix A.

It can be proven that the characteristic polynomial can also be written as:

P(λ) = (−1)ⁿ [λⁿ − d1 λⁿ⁻¹ + d2 λⁿ⁻² − ... + (−1)ⁿ dn], (3.4)

where di is the sum of the principal minors of order i of the matrix A.
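Formula (3.4) can be verified numerically. In the sketch below (an arbitrary 3×3 matrix), the sums di of principal minors are computed by brute force and compared against the coefficients of the characteristic polynomial returned by NumPy; note that np.poly computes det(λI − A) = (−1)ⁿ P(λ).

```python
import itertools
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
n = A.shape[0]

def d(i):
    """Sum of the principal minors of order i of A."""
    return sum(np.linalg.det(A[np.ix_(rows, rows)])
               for rows in itertools.combinations(range(n), i))

coeffs = np.poly(A)   # coefficients of det(lambda*I - A) = lambda^n - d1*lambda^(n-1) + ...
expected = [1.0] + [(-1) ** i * d(i) for i in range(1, n + 1)]
assert np.allclose(coeffs, expected)
```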

Remarks

1. The solutions of the characteristic equation det(A − λI) = 0 are the eigenvalues of the matrix A.

If the field K is algebraically closed, then all the roots of the characteristic equation lie in the field K, and therefore the corresponding eigenvectors also lie in the K-vector space M_{n,1}(K).

If K is not algebraically closed, e.g. K = R, the characteristic equation may also have complex roots, and the corresponding eigenvectors will lie in the complexified real vector space.

2. For any real and symmetric matrix, it can be proven that the eigenvalues are real.

3. Two similar matrices have the same characteristic polynomial.

Indeed, if A and A′ are similar, A′ = C⁻¹AC with C nonsingular, then

P′(λ) = det(A′ − λI) = det(C⁻¹AC − λI) = det[C⁻¹(A − λI)C] =

= det(C⁻¹) · det(A − λI) · det C = det(A − λI) = P(λ).

If A ∈ Mn(K) and P(x) = a0 xⁿ + a1 xⁿ⁻¹ + ... + an ∈ K[X], then the polynomial P(A) = a0 Aⁿ + a1 Aⁿ⁻¹ + ... + an I is named a matrix polynomial.

3.6 Theorem.

(Hamilton–Cayley)

If P(λ) is the characteristic polynomial of the matrix A, then P(A) = 0.

Proof. Let us consider P(λ) = det(A − λI) = a0 λⁿ + a1 λⁿ⁻¹ + ... + an.

Likewise, the adjugate of the matrix A − λI is a matrix polynomial of degree n − 1,

(A − λI)* = Bn−1 λⁿ⁻¹ + Bn−2 λⁿ⁻² + ... + B1 λ + B0, Bi ∈ Mn(K),

and it satisfies the relation (A − λI)(A − λI)* = P(λ) I, hence

(A − λI)(Bn−1 λⁿ⁻¹ + Bn−2 λⁿ⁻² + ... + B1 λ + B0) = (a0 λⁿ + a1 λⁿ⁻¹ + ... + an) I.

By identifying the coefficients of the powers of λ, we obtain

a0 I = −Bn−1

a1 I = A Bn−1 − Bn−2

a2 I = A Bn−2 − Bn−3

. . . . . . . . . . . . . . . . . . . .

an−1 I = A B1 − B0

an I = A B0.

We multiply these equalities, in order, by Aⁿ, Aⁿ⁻¹, ..., A, I and add them; the right-hand sides telescope to the zero matrix, so that

a0 Aⁿ + a1 Aⁿ⁻¹ + ... + an−1 A + an I = 0, q.e.d.

3.7 Consequence

Any matrix polynomial in A ∈ Mn(K) of degree at least n can be written as a matrix polynomial of degree at most n − 1.

3.8 Consequence.

If A is nonsingular, the inverse matrix A⁻¹ can be expressed as a matrix polynomial in A, with powers of A inferior to its order.
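A numerical illustration of the Hamilton–Cayley theorem and of Consequence 3.8, on an arbitrary invertible 2×2 matrix:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
# det(lambda*I - A) = lambda^2 - 5*lambda - 2, so P(A) = A^2 - 5A - 2I = 0
c = np.poly(A)                                   # [1, -5, -2]
P_A = c[0] * A @ A + c[1] * A + c[2] * np.eye(2)
assert np.allclose(P_A, 0)                       # Hamilton-Cayley: P(A) = 0

# From A^2 - 5A - 2I = 0 it follows that A(A - 5I) = 2I, hence
A_inv = (A - 5 * np.eye(2)) / 2.0                # A^{-1} as a polynomial in A
assert np.allclose(A_inv, np.linalg.inv(A))
```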

Now let us consider an n-dimensional K-vector space Vn and a base B, and let us denote by A ∈ Mn(K) the matrix associated to the endomorphism T with regard to this base. The equation T(x) = λx is then equivalent to (A − λI)X = 0.

The eigenvalues of the endomorphism T, if they exist, are the roots of the polynomial P(λ) in the field K, and the eigenvectors of T are the solutions of the matrix equation (A − λI)X = 0. Because the characteristic polynomial is invariant under a change of base in Vn, P(λ) depends only on the endomorphism T and not on the matrix representation of T in a given base. Therefore it is justified to speak of the characteristic polynomial and the characteristic equation of the endomorphism T, meaning the characteristic polynomial P(λ) and the characteristic equation P(λ) = 0 of any matrix A associated to T.

4. The Canonical Form of an Endomorphism

Let us consider the endomorphism T : Vn → Vn defined on the n-dimensional K-vector space Vn.

If we consider two bases B and B′ of the vector space Vn and denote by A and A′ the matrices of the endomorphism T with respect to these bases, then A′ = W⁻¹AW, where W represents the passing matrix from base B to base B′. Knowing that the matrix associated to an endomorphism depends on the chosen base of the vector space Vn, we will look for that particular base with regard to which the matrix of the endomorphism has the simplest form, that is, a diagonal form.

4.1 Definition.

The endomorphism T : Vn → Vn is diagonalizable if there exists a base B = {e1, e2, ..., en} of the vector space Vn such that the matrix of T in this base has diagonal form.

4.2 Theorem.

The endomorphism T : Vn → Vn is diagonalizable if and only if there exists a base of the vector space Vn formed only of eigenvectors of the endomorphism T.

Proof. If T is diagonalizable, then there exists a base B = {e1, e2, ..., en} with respect to which the matrix A = (aij) has diagonal form, meaning aij = 0 for i ≠ j. The action of T on the elements of base B is given by the relations T(ei) = aii ei, i = 1, ..., n; consequently ei, i = 1, ..., n, are eigenvectors of T.

Conversely, let {v1, v2, ..., vn} be a base of Vn formed only of eigenvectors, that is T(vi) = λi vi, i = 1, ..., n.

From these equations we can construct the matrix of T in this base:

D = diag(λ1, λ2, ..., λn),

where the scalars λi ∈ K are not necessarily distinct.

In the context of the previous theorem, the matrices in the similarity class corresponding to a diagonalizable endomorphism T, one for each base of the vector space Vn, are called diagonalizable.

4.3 Consequence.

If the endomorphism T has n distinct eigenvalues, then the corresponding eigenvectors determine a base of Vn, and the matrix of T in this base is a diagonal matrix having the eigenvalues of T on the main diagonal.

4.4 Consequence.

If A ∈ Mn(K) is diagonalizable, then det A = λ1 λ2 ... λn.



An eigenvalue λ ∈ K, as a root of the characteristic equation P(λ) = 0, has a multiplicity order which is named the algebraic multiplicity, and the dimension dim Sλ of the corresponding eigensubspace is named the geometric multiplicity of the eigenvalue λ.

4.5 Theorem.

The dimension of an eigensubspace of the endomorphism T is at most equal to the multiplicity order of the respective eigenvalue (the geometric multiplicity is at most equal to the algebraic one).

Proof. Let the eigenvalue λ0 ∈ K have algebraic multiplicity m ≤ n, and let the corresponding eigensubspace Sλ0 have dimension dim Sλ0 = p ≤ n; consider a base B = {e1, e2, ..., ep} of Sλ0. We obtain the following:

If p = n, then we have n linearly independent eigenvectors, hence a base of Vn with respect to which the matrix of the endomorphism T has diagonal form, with the eigenvalue λ0 on the main diagonal. In this case the characteristic polynomial is P(λ) = (−1)ⁿ (λ − λ0)ⁿ, resulting in p = m = n.

If p < n, we complete the base of the eigensubspace Sλ0 to a base B̄ = {e1, ..., ep, ep+1, ..., en} of Vn.

The action of the operator T on the elements of this base is given by:

T(ei) = λ0 ei, i = 1, ..., p, and

T(ej) = Σ_{i=1}^{n} aij ei, j = p+1, ..., n.

With regard to the base B̄, the endomorphism T has a matrix of the block form

Ā = ( λ0 Ip  * )
    (  0     C ),

hence the characteristic polynomial of T has the form P(λ) = (λ0 − λ)ᵖ Q(λ), meaning that (λ − λ0)ᵖ divides P(λ); therefore p ≤ m, q.e.d.

4.6 Theorem.

The endomorphism T : Vn → Vn is diagonalizable if and only if the characteristic polynomial has all its roots in the field K and the dimension of every eigensubspace is equal to the multiplicity order of the corresponding eigenvalue.

Proof. If T is diagonalizable, then there is a base B ⊂ Vn formed only of eigenvectors, with regard to which the matrix of T has diagonal form. In these conditions the characteristic polynomial is written in the following form:

P(λ) = (−1)ⁿ (λ − λ1)^m1 (λ − λ2)^m2 ... (λ − λp)^mp,

with λi ∈ K the eigenvalues of T, of multiplicity orders mi, i = 1, ..., p. Without restricting generality, we can assume that the first m1 vectors of the base B = {e1, e2, ..., en} are eigenvectors corresponding to the eigenvalue λ1, the next m2 correspond to λ2, and so on. Hence e1, ..., em1 ∈ Sλ1, therefore m1 ≤ dim Sλ1. But dim Sλ1 ≤ m1 (Theorem 4.5), so dim Sλ1 = m1, and likewise for the other eigenvalues.

Conversely, if λi ∈ K and dim Sλi = mi, i = 1, ..., p, then we can consider the set B = {e1, e2, ..., en}, with the convention that the first m1 vectors form a base of Sλ1, the following m2 vectors form a base of Sλ2, and so on.

Because m1 + m2 + ... + mp = n and eigenvectors corresponding to distinct eigenvalues are linearly independent, B is a base of Vn, with regard to which the matrix of T has the form:

A = diag(λ1, ..., λ1, λ2, ..., λ2, ..., λp, ..., λp), each λi appearing mi times.

A being a diagonal matrix, we conclude that the endomorphism T is diagonalizable.

4.7 Consequence.

If T : Vn → Vn is a diagonalizable endomorphism, then the vector space Vn can be represented as the direct sum

Vn = Sλ1 ⊕ Sλ2 ⊕ ... ⊕ Sλp.

Practically, the required steps for diagonalizing an endomorphism T are the following:

1. We write the matrix A associated to the endomorphism T with regard to a given base of the vector space Vn.

2. We solve the characteristic equation det(A − λI) = 0, determining the eigenvalues λ1, λ2, ..., λp with their corresponding multiplicity orders m1, m2, ..., mp.

3. We use the result of Theorem 4.6 and distinguish the following cases:

I) If λi ∈ K, i = 1, ..., p, we determine the dimensions of the eigensubspaces Sλi. The dimension of the eigensubspace Sλi, the vector space of solutions of the homogeneous system (A − λiI)X = 0, is given by dim Sλi = n − rank(A − λiI). The dimension of the subspace can also be found by determining the subspace itself.

a) If dim Sλi = mi, i = 1, ..., p, then T is diagonalizable. The matrix associated to T with regard to the base formed of eigenvectors is a diagonal matrix having on the main diagonal the eigenvalues, each written as many times as its multiplicity order allows.

We can check this result by constructing the matrix T having as columns the coordinates of the eigenvectors (the diagonalizing matrix); it represents the passing matrix from the base considered initially to the base formed of eigenvectors, and the matrix associated to the endomorphism with regard to the latter base is the diagonal matrix D, given by

D = T⁻¹AT = diag(λ1, λ2, ..., λn).

b) If there is a λi ∈ K such that dim Sλi < mi, then T is not diagonalizable. The following paragraph will analyze this case.

II) If some λi ∉ K, then T is not diagonalizable. The problem of diagonalizing T can be considered only if the K-vector space V is considered over an extension of the field K.
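The procedure above, run on an arbitrary 2×2 example that falls into case I.a:

```python
import numpy as np

A = np.array([[4., 1.],
              [1., 4.]])                 # illustrative matrix of T in the initial base

lam, T = np.linalg.eig(A)                # step 2: eigenvalues; the columns of T are
                                         # the corresponding eigenvectors
# both eigenvalues are simple here, so dim S_lambda = multiplicity = 1
# for each of them, and T is diagonalizable (case I.a)
D = np.linalg.inv(T) @ A @ T             # change to the base of eigenvectors
assert np.allclose(D, np.diag(lam))      # D = T^{-1} A T carries the eigenvalues
```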

Let us consider Vn a K-vector space and T ∈ End(Vn) an endomorphism defined on Vn.

If A ∈ Mn(K) is the matrix of T with regard to a base of Vn, then A can be diagonalized only if the conditions of Theorem 4.6 are fulfilled.

If the eigenvalues of T belong to the field K, λi ∈ K, but the geometric multiplicity differs from the algebraic multiplicity, dim Sλi < mi, for at least one eigenvalue λi, then the endomorphism T is not diagonalizable; still, we can determine a base of the vector space Vn with respect to which the endomorphism T has a more general canonical form, named the Jordan form.

For λ ∈ K, the matrices of the form:

(λ),

( λ  1 )
( 0  λ ),

( λ  1  0 )
( 0  λ  1 )
( 0  0  λ ),  ...,   (3.5)

are named Jordan cells attached to the scalar λ, of orders 1, 2, 3, ..., n.

4.7 Definition

The endomorphism T : Vn → Vn is called jordanizable if there exists a base of the vector space Vn with regard to which the associated matrix has the block diagonal form:

J = diag(J1, J2, ..., Js),

where Ji, i = 1, ..., s, are Jordan cells of various orders attached to the eigenvalues λi.

A Jordan cell of order p attached to the eigenvalue λ ∈ K, of algebraic multiplicity m ≥ p, corresponds to the linearly independent vectors e1, e2, ..., ep which satisfy the following relations:

T(e1) = λ e1

T(e2) = λ e2 + e1

T(e3) = λ e3 + e2

. . . . . . . . . . . . . . . . . . . .

T(ep) = λ ep + ep−1

The vector e1 is an eigenvector, and the vectors e2, e3, ..., ep are called principal vectors.

Remarks

1. The diagonal form of a diagonalizable endomorphism is a particular case of the Jordan canonical form, with all Jordan cells of the first order.

2. The Jordan canonical form is not unique. The order of the Jordan cells on the main diagonal depends on the chosen order of the principal vectors and the eigenvectors in the given base.

3. The number of the Jordan cells, equal to the number of the linearly independent eigenvectors, as well as their orders, are uniquely determined.

The following theorem can be proven:

4.8 Theorem.

(Jordan) If the endomorphism T ∈ End(Vn) has all its eigenvalues in the field K, then there exists a base of the vector space Vn with regard to which the matrix associated to T has Jordan form.

Practically, for determining the canonical Jordan form of an endomorphism, we follow these steps:

1. We write the matrix A associated to the endomorphism T with regard to the given base.

2. We solve the characteristic equation det(A − λI) = 0, determining the eigenvalues λ1, λ2, ..., λp with their multiplicity orders m1, m2, ..., mp.

3. We find the eigensubspaces Sλi for each eigenvalue λi.

4. We calculate the number of Jordan cells, separately for each eigenvalue λi, given by dim Sλi = n − rank(A − λiI); that is, for each eigenvalue the number of linearly independent eigenvectors gives the number of the corresponding Jordan cells.

5. The principal vectors are determined for those eigenvalues for which dim Sλi < mi; their number is mi − dim Sλi. If v ∈ Sλi is some eigenvector, we check the compatibility conditions and solve, one by one, the linear systems

(A − λiI)X1 = v, (A − λiI)X2 = X1, ..., (A − λiI)Xp = Xp−1.

Keeping in mind the compatibility conditions and the general form of the eigenvectors and of the principal vectors, we determine, by giving arbitrary values to the parameters, the linearly independent eigenvectors of Sλi and the principal vectors associated to each of them.

6. We write the base of the vector space Vn by uniting the systems of mi linearly independent vectors, i = 1, ..., p, formed of eigenvectors and principal vectors.

7. Using the matrix T having as columns the coordinates of the vectors of this base, taken in the order eigenvector followed by its associated principal vectors (if they exist), we obtain the matrix J = T⁻¹AT, the Jordan canonical form, which contains on the main diagonal the Jordan cells, in the order in which the corresponding vector systems appear in the constructed base. Each Jordan cell has order equal to the number of vectors in the system formed of an eigenvector and its associated principal vectors.
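For the Jordan form itself, a computer algebra system can carry out these steps; the sketch below assumes SymPy is available and uses a standard non-diagonalizable 2×2 example.

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 2]])          # eigenvalue 2 with algebraic multiplicity 2 but
                                 # geometric multiplicity 1: not diagonalizable
T, J = A.jordan_form()           # passing matrix T and Jordan form J, A = T J T^{-1}
assert J == sp.Matrix([[2, 1],
                       [0, 2]])  # a single Jordan cell of order 2
assert T.inv() * A * T == J      # J = T^{-1} A T
```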

5. Linear Transformations on Euclidean Vector Spaces

The properties of linear transformations defined on an arbitrary vector space apply to Euclidean vector spaces as well. The scalar product that defines the Euclidean structure permits the introduction of some particular classes of linear transformations.

5.1 Orthogonal transformations

Let V and W be two Euclidean R-vector spaces. Since no confusion can arise, we denote the scalar products on the two vector spaces by the same symbol < , >.

5.1 Definition

A linear transformation T : V → W is named an orthogonal linear transformation if it preserves the scalar product, meaning

< Tx, Ty > = < x, y >, ∀ x, y ∈ V. (5.1)

Examples.

1. The identical transformation T : V → V, T(x) = x, is an orthogonal transformation.

2. The transformation T : V → V which associates to a vector x ∈ V its opposite, T(x) = −x, is an orthogonal transformation.

5.2 Theorem.

The linear transformation T : V → W is orthogonal if and only if it preserves the norm, meaning

||Tx|| = ||x||, ∀ x ∈ V. (5.2)

Proof. If T is orthogonal, then < Tx, Ty > = < x, y >, which for x = y becomes

< Tx, Tx > = < x, x > ⟹ ||Tx||² = ||x||² ⟹ ||Tx|| = ||x||.

Conversely, using the relation < a, b > = ¼ ( ||a + b||² − ||a − b||² ), we have

< Tx, Ty > = ¼ [ ||Tx + Ty||² − ||Tx − Ty||² ] =

= ¼ [ ||T(x + y)||² − ||T(x − y)||² ] =

= ¼ [ ||x + y||² − ||x − y||² ] = < x, y >, q.e.d.

5.3 Consequence

An orthogonal transformation T : V → W is an injective linear transformation.

Proof. If T is an orthogonal transformation, then ||Tx|| = ||x||, and the hypothesis Tx = 0 yields ||x|| = 0, so x = 0. Consequently the kernel is Ker T = {0}, meaning that T is injective.

Using Theorem 1.7, we find that an orthogonal transformation carries a linearly independent system into a linearly independent system.

5.4 Consequence

An orthogonal transformation T : V → V preserves the Euclidean distance and has the origin as a fixed point, T(0) = 0.

Indeed, d(Tx, Ty) = ||Tx − Ty|| = ||T(x − y)|| = ||x − y|| = d(x, y).

An orthogonal transformation T different from the identical transformation admits the origin, T(0) = 0, as a fixed point; if, on the contrary, every point is fixed, then T coincides with the identical transformation.

If W = V and T1, T2 are two orthogonal transformations on V, then their composition is also an orthogonal transformation.

If the orthogonal transformation T : V → V is also surjective, then T is invertible; moreover, T⁻¹ is an orthogonal transformation.

Under these conditions, the set of all bijective orthogonal transformations of the Euclidean vector space V forms a group with respect to the composition (product) of orthogonal transformations, named the orthogonal group of the Euclidean vector space V and denoted GO(V); it is a subgroup of the linear group GL(V).

Let us consider, in the finite dimensional Euclidean vector spaces Vn and Wm, the orthonormal bases B = {e1, e2, ..., en} and B′ = {f1, f2, ..., fm} respectively, and an orthogonal linear transformation T : Vn → Wm.

With regard to the orthonormal bases B and B′, the linear transformation is characterized by the matrix A ∈ M_{m×n}(R) through the relations:

T(ej) = Σ_{i=1}^{m} aij fi, j = 1, ..., n.

The bases B and B′ being orthonormal means that

< ei, ej > = δij, i, j = 1, ..., n, and < fk, fh > = δkh, k, h = 1, ..., m.

Let us evaluate the scalar product of the images of the vectors of B:

< T(ei), T(ej) > = < Σ_{k=1}^{m} aki fk, Σ_{h=1}^{m} ahj fh > = Σ_{k=1}^{m} Σ_{h=1}^{m} aki ahj < fk, fh > =

= Σ_{k=1}^{m} Σ_{h=1}^{m} aki ahj δkh = Σ_{k=1}^{m} aki akj.

By using the orthogonality property of T, < T(ei), T(ej) > = < ei, ej > = δij, we have

Σ_{k=1}^{m} aki akj = δij, i, j = 1, ..., n. (5.3)

The relation (5.3) can also be written in the form

ᵗA · A = In. (5.3′)

Conversely, if T is a linear transformation characterized, with regard to the orthonormal bases B and B′, by a matrix A ∈ M_{m×n}(R) which satisfies condition (5.3′), then T is an orthogonal transformation.

Indeed, let x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn) be two vectors of the space Vn. If we calculate the scalar product of the images of these vectors,

< T(x), T(y) > = < T(Σ_{i=1}^{n} xi ei), T(Σ_{j=1}^{n} yj ej) > = Σ_{i=1}^{n} Σ_{j=1}^{n} xi yj < T(ei), T(ej) > =

= Σ_{i=1}^{n} Σ_{j=1}^{n} xi yj δij = Σ_{i=1}^{n} xi yi = < x, y >,

meaning < T(x), T(y) > = < x, y >, so T is an orthogonal transformation.

We have also proven the following theorem:

5.5 Theorem.

With regard to the orthonormal bases B ⊂ Vn and B′ ⊂ Wm, the linear transformation T : Vn → Wm is orthogonal if and only if the associated matrix satisfies the condition ᵗA · A = In.

5.6 Consequence

An orthogonal transformation T : Vn → Vn is characterized, with regard to an orthonormal base B ⊂ Vn, by an orthogonal matrix, A⁻¹ = ᵗA.

Taking into consideration that det ᵗA = det A, it results that if A is an orthogonal matrix, then det A = ±1.

The subset of the orthogonal matrices having the property det A = 1 forms a subgroup, denoted SO(n; R), of the group of orthogonal matrices of order n, named the special orthogonal group. An orthogonal transformation characterized by a matrix A ∈ SO(n; R) is also called a rotation.
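A plane rotation gives a concrete instance of Theorem 5.5 and Consequence 5.6; the angle below is an arbitrary choice.

```python
import numpy as np

phi = 0.7
A = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

assert np.allclose(A.T @ A, np.eye(2))        # tA A = I_n  (orthogonality)
assert np.isclose(np.linalg.det(A), 1.0)      # det A = +1: A lies in SO(2; R)

# an orthogonal transformation preserves the scalar product, relation (5.1)
x, y = np.array([1., 2.]), np.array([-3., 0.5])
assert np.isclose((A @ x) @ (A @ y), x @ y)
```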

Remarks

1. Since an orthogonal transformation T : Vn → Wm is injective, if B ⊂ Vn is a base, then its image through T is a base of Im T ⊆ Wm. Consequently n ≤ m.

2. Any orthogonal transformation between two Euclidean vector spaces of the same dimension is an isomorphism of Euclidean vector spaces.

3. The matrix associated, with regard to an orthonormal base, to the composition of two orthogonal transformations T1, T2 : Vn → Vn is the product of the matrices associated to the transformations T1 and T2. Thus the orthogonal group GO(Vn) of the Euclidean vector space Vn is isomorphic to the multiplicative group GO(n; R) of the orthogonal matrices of order n.

5.2 Symmetric linear transformations

Let V and W be two Euclidean R-vector spaces and T : V → W a linear transformation.

5.7 Definition

The linear transformation ᵗT : W → V defined by the relation

< ᵗT y, x >V = < y, Tx >W, ∀ x ∈ V, ∀ y ∈ W, (5.4)

is called the transpose of the linear transformation T.

5.8 Definition

A linear transformation T : V → V is called symmetric (antisymmetric) if ᵗT = T (respectively ᵗT = −T).

Let Vn be a finite dimensional Euclidean R-vector space and B ⊂ Vn an orthonormal base. If the endomorphism T : Vn → Vn is characterized by the real matrix A ∈ Mn(R), one can easily prove that to any symmetric (antisymmetric) endomorphism there corresponds a symmetric (antisymmetric) matrix with regard to an orthonormal base.

5.9 Theorem

The eigenvalues of a symmetric linear transformation T : Vn → Vn are real.

Proof. Let λ be an arbitrary root of the characteristic equation det(A − λI) = 0 and X = ᵗ(x1, x2, ..., xn) an eigenvector corresponding to the eigenvalue λ.

Denoting by X̄ the conjugate of the matrix X and multiplying the equation (A − λI)X = 0 on the left by ᵗX̄, we obtain ᵗX̄ A X = λ ᵗX̄ X.

Since A is real and symmetric, the first member of the equality is real, because its conjugate is ᵗX A X̄ = ᵗ(ᵗX A X̄) = ᵗX̄ ᵗA X = ᵗX̄ A X.

Also, ᵗX̄ X ≠ 0 is a real number, and λ is the quotient of two real numbers, hence it is real.

Let the eigenvector v1 ∈ Vn correspond to the eigenvalue λ1 ∈ R, let S1 be the subspace generated by v1, and Vn−1 ⊂ Vn its orthogonal complement, Vn = S1 ⊕ Vn−1.

5.10 Theorem.

The subspace Vn−1 ⊂ Vn is invariant with respect to the symmetric linear transformation T : Vn → Vn.

Proof. Let v1 be the eigenvector corresponding to the eigenvalue λ1, so that T(v1) = λ1 v1. For v ∈ Vn−1 the relation < v1, v > = 0 is satisfied. Let us prove that T(v) ∈ Vn−1:

< v1, T(v) > = < T(v1), v > = < λ1 v1, v > = λ1 < v1, v > = 0.

Based on this result the next theorem can be easily prooven.

5.11 Theorem.

The geometric multiplicity of any eigenvalue of the symmetric endomorphism T : Vn → Vn is equal to its algebraic multiplicity.

5.12 Proposition

The eigensubspaces of a symmetric endomorphism T : Vn → Vn corresponding to distinct eigenvalues are orthogonal.

Proof. Let Sλi, Sλj be the eigensubspaces corresponding to the distinct eigenvalues λi and λj, of algebraic multiplicity orders mi and mj respectively. For v ∈ Sλi and w ∈ Sλj, with T(v) = λi v and T(w) = λj w, we obtain:

< w, T(v) > = λi < w, v > and < T(w), v > = λj < w, v >. Since T is symmetric, we have (λj − λi) < w, v > = < T(w), v > − < w, T(v) > = 0.

Thus λj ≠ λi implies < w, v > = 0, q.e.d.

5.13 Proposition

Any symmetric linear transformation of a Euclidean vector space Vn determines an orthonormal base of Vn formed of eigenvectors.

Indeed, the first m1 vectors are chosen as an orthonormal base of the eigensubspace corresponding to the characteristic root λ1 of multiplicity m1, and so on, until all the eigenvalues are used up, m1 + m2 + ... + mp = n.

With regard to the orthonormal base formed of these eigenvectors, the endomorphism T : Vn → Vn has canonical form: the matrix associated to the endomorphism T in this base is diagonal, having on the main diagonal the eigenvalues, each written as many times as its multiplicity order allows, and it is expressed in terms of the matrix A of the endomorphism T in the orthonormal base B through the relation D = ᵗW A W, where W is the orthogonal passing matrix from the base B to the base formed of eigenvectors.
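Proposition 5.13 and the relation D = ᵗW A W can be illustrated numerically on an arbitrary real symmetric matrix; NumPy's eigh routine returns an orthonormal system of eigenvectors.

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 2., 0.],
              [0., 0., 3.]])            # an arbitrary real symmetric matrix

lam, W = np.linalg.eigh(A)              # columns of W: orthonormal eigenvectors
assert np.allclose(W.T @ W, np.eye(3))  # W is an orthogonal matrix

D = W.T @ A @ W                         # D = tW A W
assert np.allclose(D, np.diag(lam))     # diagonal, with the (real) eigenvalues
```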

5.3 Isometric Transformations on Euclidean Point Spaces

Let E = (E, V, φ) be a Euclidean point space, E being the support set, V the director vector space and φ the affine structure function.

5.14 Definition

A bijective correspondence f : E → E is called a transformation of the set E, or a permutation of the set E.

If E is endowed with a certain geometrical structure and f satisfies certain conditions referring to this structure, then f is named a geometrical transformation.

Denoting by S(E) the group of transformations of the set E with regard to the composition of functions, and by G a subgroup of it, the pair (E, G) is named a geometric space or a space with a fundamental group.

A subset of points F ⊂ E is named a figure of the geometric space (E, G). Two figures F1, F2 ⊂ E are called congruent if there exists f ∈ G such that f(F1) = F2.

A property or quantity referring to the figures of the geometric space (E, G) is called geometric if it is invariant under the transformations of the group G.

We call the geometry of the space (E, G) the theory obtained by studying the notions, properties and geometric quantities referring to the figures of the set E.

If E = (E, V, φ) and E′ = (E′, V′, φ′) are two affine spaces, then an affine transformation t : E → E′ is uniquely determined by a pair of points A ∈ E, A′ ∈ E′ and by a linear transformation T : V → V′.

We shall consider the Euclidean point space of free vectors E3 = (E3, V3, φ) and a Cartesian frame R = (O; ē1, ē2, ē3) in this space.

An affine transformation t : E3 → E3, t(M) = M′, realizes the correspondence M(x1, x2, x3) → M′(x′1, x′2, x′3), characterized by the relations

x′i = Σ_{j=1}^{3} aij xj + bi, i = 1, 2, 3, det(aij) ≠ 0, (5.5)

and written in matrix form

X′ = AX + B, det A ≠ 0. (5.5′)

An affine transformation can also be interpreted as a change of affine frames. The set of affine transformations forms a group with regard to the composition of applications, named the affine group.

When studying the properties of geometrical spaces, a very special interest attaches to those transformations of the space that do not deform the figures. For that matter we present some examples of affine transformations having the above mentioned property.

Examples:

1. The application sO : E3 → E3 defined by sO(O) = O, with O ∈ E3 fixed, and sO(P) = P′ with the property that O is the middle point of the segment PP′, is an affine transformation called the symmetry of center O. The associated linear transformation T : V → V is defined by the relation T(v̄) = −v̄.

2. If d ⊂ E3 is a straight line and P a point not contained in the line d, then there exists one and only one point P′ ∈ E3 with the properties PP′ ⊥ d and the middle point of the segment PP′ on the line d.

The application sd : E3 → E3, sd(P) = P′, with P′ defined above, is called the axial symmetry of axis d. If P0 is the orthogonal projection of the point P on the line d, then we have the affine combination P′ = 2P0 − P. The associated linear transformation T : V → V is given by T(v̄) = 2 prd(v̄) − v̄, where prd denotes the orthogonal projection onto the direction of d.

3. The application t : E3 → E3 given by the correspondence t(P) = P′ with the property PP′ = v̄, where v̄ ∈ V is a given vector, is an affine transformation on E3 named the translation of vector v̄.

The associated linear transformation T : V → V is the identical application, T(v̄) = v̄.

Let E2 = (E2, V2, φ) be a two-dimensional Euclidean point space.

The application rO,α : E2 → E2, rO,α(P) = P′, with the following properties:

d(O, P′) = d(O, P) and ∠POP′ = α, is called the rotation of center O and angle α.

The associated linear transformation T : V2 → V2 is characterized by an orthogonal matrix.

In the geometry of Euclidean spaces we are foremost interested in those geometrical transformations which preserve certain properties of the figures of the considered space. In other words, we will consider certain subgroups of the affine group of the Euclidean space which govern these transformations.

5.15 Definition

We call an isometry of the Euclidean point space E3 = (E3, V3, φ) an application f : E3 → E3 which has the property:

d(f(A), f(B)) = d(A, B), ∀ A, B ∈ E3. (5.6)

If we consider the representative of a vector v̄ ∈ V3 in the point A ∈ E3, then there exists a unique point B ∈ E3 such that v̄ = AB, and besides we have d(A, B) = ||AB|| = ||v̄||. Therefore relation (5.6), for the linear transformation T : V3 → V3 associated to f, is equivalent to

||T(v̄)|| = ||v̄||, ∀ v̄ ∈ V3. (5.6′)

If the point space E3 is the space of Euclidean geometry, then an isometry f : E3 → E3 is a bijective application, hence a geometrical transformation, named an isometric transformation.

The set of the isometric transformations forms, with regard to the composition of applications, a subgroup Izo E3, called the group of isometries.

5.16 Theorem.

The affine application f : E3 → E3 is an isometric transformation if and only if the associated linear application T : V3 → V3 is an orthogonal transformation.

The geometrical transformations given in the previous examples (central symmetry, axial symmetry, translation and rotation) are isometric transformations.

5.17 Theorem.

Any isometry f : E3 → E3 is the product of a translation with an isometry having a fixed point, f = t ∘ g.

Let us consider the Euclidean plane PE = (E2, Izo E2), with R = (O; ē1, ē2) a Cartesian orthonormal frame in the Euclidean point space E2 = (E2, V2, φ), and

g : E2 → E2 an isometry with O as a fixed point. Consider the point A(1, 0) and an arbitrary point M(x, y), having the images A′(a, b) and M′(x′, y′) respectively.

From the conditions d(O, A′) = d(O, A), d(O, M′) = d(O, M) and d(A′, M′) = d(A, M) we obtain the system of equations

a² + b² = 1, x′² + y′² = x² + y², (x′ − a)² + (y′ − b)² = (x − 1)² + y². (5.7)

The general solution is:

x′ = ax − εby, y′ = bx + εay, with a² + b² = 1, ε = ±1. (5.8)

Conversely, the formulae (5.8) represent an isometry. Indeed, if M1 and M2 are two arbitrary points and M′1, M′2 are their images, then

d(M′1, M′2)² = (x′2 − x′1)² + (y′2 − y′1)² =

= (a² + b²) [(x2 − x1)² + (y2 − y1)²] = d(M1, M2)²,

hence d(M′1, M′2) = d(M1, M2).

The fixed points of the isometry characterized by the equations (5.8) are obtained by setting x′ = x and y′ = y in (5.8), hence

(a − 1)x − εby = 0, bx + (εa − 1)y = 0. (5.9)

The system (5.9) is a homogeneous system of linear equations and has the determinant

Δ = (ε + 1)(1 − a).

If ε = −1, the system (5.9) admits an infinity of fixed points, hence the equations (5.8) represent an axial symmetry.

If ε = 1 and a ≠ 1, the isometry (5.8) admits only the origin as a fixed point and represents a rotation, while for ε = 1 and a = 1 the application (5.8) becomes the identical transformation.

The equations (5.8) can be written in matrix form as follows:

( x′ )   ( a  −εb ) ( x )
( y′ ) = ( b   εa ) ( y ). (5.8′)

The associated matrix A, under the conditions a² + b² = 1, ε = ±1, is an orthogonal matrix, which means that the subgroup of the isometries of the Euclidean plane having the origin as a fixed point is isomorphic to the orthogonal group GO(2; R).

For ε = 1 (considering the identical application as the rotation of angle φ = 0), the orthogonal matrix A has the property det A = 1, which means that the subgroup of the rotations of the Euclidean plane is isomorphic to the special orthogonal subgroup SO(2; R).

By composing the isometries having the origin as a fixed point with the translations which carry the origin into the point (x0, y0), we obtain the isometries of the Euclidean plane PE = (E2, Izo E2), characterized analytically by the equations:

x′ = ax − εby + x0, y′ = bx + εay + y0. (5.10)

Using the trigonometric functions and setting a = cos φ, b = sin φ, φ ∈ R, the equations (5.10) are written in the following form:

x′ = x cos φ − εy sin φ + x0, y′ = x sin φ + εy cos φ + y0. (5.11)

For ε = 1, the transformations characterized by the equations

x′ = x cos φ − y sin φ + x0, y′ = x sin φ + y cos φ + y0, (5.12)

meaning those isometries obtained by composing a rotation with a translation (the movements of the plane), form a subgroup called the subgroup of movements.
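A movement of the plane, equations (5.12), composed of an arbitrary rotation and translation, checked numerically to preserve distances:

```python
import numpy as np

phi, x0, y0 = 0.5, 3.0, -1.0             # arbitrary angle and translation

def movement(p):
    x, y = p
    return np.array([x * np.cos(phi) - y * np.sin(phi) + x0,
                     x * np.sin(phi) + y * np.cos(phi) + y0])

M1, M2 = np.array([1., 2.]), np.array([-2., 4.])
d_before = np.linalg.norm(M2 - M1)
d_after = np.linalg.norm(movement(M2) - movement(M1))
assert np.isclose(d_before, d_after)     # d(M1', M2') = d(M1, M2)
```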

In the three-dimensional Euclidean point space E3 = (E3, V3, φ), the isometries f : E3 → E3, f(x1, x2, x3) = (y1, y2, y3), are the transformations characterized by the equations

yi = Σ_{j=1}^{3} aij xj + bi, i = 1, 2, 3, (5.13)

where the matrix A = (aij), i, j = 1, 2, 3, is an orthogonal matrix, and the point of coordinates (b1, b2, b3) is the image of the origin of the orthonormal Cartesian frame R = (O; ē1, ē2, ē3) of E3 under the translation.

Problems

1. Find out which of the following applications are linear transformations:

a) T : R² → R², T(x1, x2) = (x1 − x2, x1 + x2);

b) T : R² → R², T(x1, x2) = (x1 cos α − x2 sin α, x1 sin α + x2 cos α), α ∈ R;

c) T : R³ → R³, T(x1, x2, x3) = (x1 − 2x2 + 3x3, 2x3, x1 + x2);

d) T : R² → R³, T(x1, x2) = (x1 − x2, x1 + x2, 0);

e) T : R² → R³, T(x1, x2) = (x1x2, x1, x2).

2. Prove that the following transformations are linear:

a) T : V → R, T( ) = ;

b) T : C⁽ⁿ⁾[0,1] → C⁰[0,1], Tf = Σ_{i=0}^{n} ai f⁽ⁱ⁾, ai ∈ R;

c) T : C[0,1] → C¹[0,1], (Tf)(x) = ∫₀ˣ f(t) dt.

3. Determine the linear transformation T : R³ → R³, T(vi) = wi, i = 1, 2, 3, where

v1 = (1,1,0), v2 = (1,0,1), v3 = (0,1,1) and w1 = (2,1,0), w2 = (−1,0,1), w3 = (1,1,1).

Determine those transformations for which T³ = T.

4. Show that the transformation T : R² → R³, T(x1, x2) = (x1 − x2, x1 + x2, 2x1 + 3x2) is injective, but not surjective.

5. Determine Ker T and Im T for the transformation T : R³ → R⁴,

T(x1, x2, x3) = (−x1 + x2 + x3, x1 − x2 + x3, 2x3, x1 − x2), and verify the relation dim Ker T + dim Im T = 3.

6. Consider the transformation T : R³ → R³ given in the canonical base by the matrix

A =

a) Determine the vector subspace T⁻¹(W) for the subspace

W = .

b)      Determine a base for each of the subspaces Ker T and Im T .

7. Show that the application T : V → V, T( ) = , with ∈ V fixed, is linear. Determine Ker T, Im T and verify the rank theorem.

8. On the space R³ we consider the projections Ti onto the three coordinate axes.

Show that R³ = Im T1 ⊕ Im T2 ⊕ Im T3 and Ti ∘ Tj = 0 for i ≠ j, i, j = 1, 2, 3.

9. Show that the endomorphism T : R³ → R³ given by the matrix

A = is nilpotent of index two (T² = 0).

A transformation with this property is called a tangent structure.

10. Show that the transformation T : R²ⁿ → R²ⁿ defined by the relation

T(x) = (xn+1, xn+2, ..., x2n, −x1, −x2, ..., −xn) has the property T² = −Id.

A transformation with this property is called a complex structure.

11. Determine the linear transformation which maps the point (x1, x2, x3) ∈ R³ to its symmetric with respect to the plane x1 + x2 + x3 = 0, and show that the transformation thus determined is orthogonal.

12. Determine the eigenvectors and the eigenvalues of the endomorphisms T : R³ → R³ characterized by the matrices

A= , A= , A= , A= .

13. Study the possibility of reduction to the canonical form and, when possible, find the diagonalizing matrix for the endomorphisms with the following associated matrices:

A= , A= , A= , A= .

14. Determine the Jordan canonical form for the following matrices:

A = , A = , A = .

15. Determine an orthonormal base of the space R³ in which the endomorphism T : R³ → R³, T(x) = (−x1 + x2 − 4x3, 2x1 − 4x2 − 2x3, −4x1 − 2x2 − x3) admits the canonical form.

16. Using the Hamilton–Cayley theorem, calculate A⁻¹ and P(A), where

P(x) = x⁴ + x³ + x² + x + 1, for the following matrices:

A = , A= , A= ,A = .



