§1. Definition and examples
The notion of vector space constitutes the object of study of linear algebra and represents one of the most important algebraic structures, used in various branches of mathematics as well as in applied disciplines.


1.1 Definition
A nonempty set V is called a vector space (linear space) over the field K (a K-vector space) if the following conditions are satisfied:
I. (V, +) is an abelian group, namely:
a) (x + y) + z = x + (y + z), ∀x, y, z ∈ V
b) ∃0 ∈ V such that ∀x ∈ V, x + 0 = 0 + x = x
c) ∀x ∈ V, ∃(−x) ∈ V such that x + (−x) = (−x) + x = 0
d) ∀x, y ∈ V, x + y = y + x.
II. The external composition law φ: K × V → V, (a, x) ↦ ax, satisfies:
a) a(x + y) = ax + ay
b) (a + b)x = ax + bx
c) a(bx) = (ab)x
d) 1·x = x, ∀a, b ∈ K, ∀x, y ∈ V.
Conditions I and II represent the vector space axioms over the field K.
The elements of V are called vectors, the elements of the field K are called scalars, and the external composition law is called multiplication by scalars.
If the commutative field K is the field of real numbers R or of complex numbers C, we speak of a real vector space, respectively a complex vector space.
Most of the time we will consider vector spaces over the field of real numbers and call them simply "vector spaces"; in the other cases we will indicate the scalar field.
If we denote by 0_V the null vector of the additive group V and by 0_K the null scalar, then from the axioms that define the vector space V over the field K we obtain the following properties:
1.2 Corollary
If V is a vector space over the field K, then for ∀a ∈ K and ∀x ∈ V:
1) 0_K·x = 0_V
2) a·0_V = 0_V
3) (−1)·x = −x.
Proof: 1) Using axioms II_b and II_d we have 0_K·x = (0_K + 0_K)x = 0_K·x + 0_K·x ⇒ 0_K·x = 0_V.
2) Taking into account I_b and II_a, a·0_V = a(0_V + 0_V) = a·0_V + a·0_V, from which we obtain a·0_V = 0_V.
3) From the additive group axioms of the field K, property 1) and axiom I_c we have x + (−1)x = [1 + (−1)]x = 0_K·x = 0_V, so (−1)x = −x.
1° Let K be a commutative field. Considering the additive abelian structure of the field K, the set K is a K-vector space. Moreover, if K′ ⊆ K is a subfield, then K is a K′-vector space. The set of complex numbers C can be seen as a C-vector space, an R-vector space or a Q-vector space.
2° The set K^n = K × K × … × K, where K is a commutative field, is a K-vector space, called the arithmetic (standard) space, with respect to the operations: for x, y ∈ K^n, a ∈ K, x = (x_1, x_2, …, x_n), y = (y_1, y_2, …, y_n),
x + y = (x_1 + y_1, x_2 + y_2, …, x_n + y_n)
ax = (ax_1, ax_2, …, ax_n).
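The componentwise operations of the arithmetic space can be sketched directly in code; below is a minimal illustration with K = R and n = 3 (the helper names add and scale are ours, not from the text):

```python
# Componentwise operations in the arithmetic space R^n (example 2°, K = R, n = 3).
x = (1.0, 2.0, 3.0)
y = (4.0, 5.0, 6.0)
a = 2.0

def add(x, y):
    # x + y = (x_1 + y_1, ..., x_n + y_n)
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(a, x):
    # ax = (a*x_1, ..., a*x_n)
    return tuple(a * xi for xi in x)

print(add(x, y))     # (5.0, 7.0, 9.0)
print(scale(a, x))   # (2.0, 4.0, 6.0)
```

Note that the group axioms I are inherited componentwise from (K, +); for instance x + (−1)x gives the null vector (0, 0, 0).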
3° The set M_{m×n}(K) of matrices with entries in a commutative field K is a K-vector space with respect to the entrywise operations:
A + B = (a_ij + b_ij)
aA = (a·a_ij).
4° The set K[X] of polynomials with coefficients in the field K is a K-vector space with respect to the operations:
f + g = (a_0 + b_0, a_1 + b_1, …)
af = (aa_0, aa_1, …),
for f = (a_0, a_1, …), g = (b_0, b_1, …) ∈ K[X], a ∈ K.
5° The set of solutions of a linear homogeneous system of equations forms a vector space over the field K of the coefficients of the system. The solutions of a system of m equations with n unknowns, seen as elements of K^n (n-tuples), can be added and multiplied by a scalar following the sum and the product by scalars defined on K^n.
6° The set V_3 of free vectors of the point space of elementary geometry is an R-vector space.
To build this set we consider the geometric space E_3 and the set M = E_3 × E_3 of pairs of points. The elements of the set M are called bipoints or oriented segments and will be denoted by AB (fig. 1).
Fig. 1
The length (module or norm) of an oriented segment AB is the length of the segment [AB].
On the set M we introduce the equipollence relation "~": two oriented segments are equipollent if they have the same direction, the same sense and the same length (fig. 2).
Fig. 2
It is easy to verify that the equipollence relation is an equivalence relation on the set M (it is reflexive, symmetric and transitive).
The set of equivalence classes with respect to this relation,
M/~ = V_3,
defines the set of free vectors of the geometric space E_3. The equivalence class of the oriented segment AB is called a free vector.
Two free vectors are equal if their representative oriented segments are equipollent.
Two free vectors which have the same direction are called collinear vectors. Two collinear vectors with the same length and opposite senses are called opposite vectors.
Three free vectors are called coplanar if their corresponding oriented segments are parallel to a plane.
The set V_{3 } can be organized as an abelian additive group.
Fig. 3
The rule which defines the sum of two free vectors is the triangle (or parallelogram) rule (fig. 3). The sum of two free vectors is the internal composition law
"+": V_3 × V_3 → V_3.
The external composition law is
φ: R × V_3 → V_3, (λ, a) ↦ λa,
where the vector λa has the same direction as a, the same sense as a if λ > 0 and the opposite sense if λ < 0, and its length is |λ| times the length of a.
In conclusion, the set of free vectors is a real vector space.
§ 2. Vector subspaces
Let V be a vector space over the field K.
2.1 Definition.
A nonempty subset U ⊆ V is called a vector subspace of V if the algebraic operations of V induce on U a structure of K-vector space.
2.2 Theorem.
If U is a nonempty subset of the K-vector space V, then the following statements are equivalent:
1° U is a vector subspace of V;
2° ∀x, y ∈ U, ∀a ∈ K we have a) x + y ∈ U, b) ax ∈ U;
3° ∀x, y ∈ U, ∀a, b ∈ K we have ax + by ∈ U.
Proof.
1° ⇒ 2°: if U ⊆ V is a subspace, then U is closed under the two induced operations, so for ∀x, y ∈ U and ∀a ∈ K we have x + y ∈ U and ax ∈ U.
2° ⇒ 3°: for ∀x, y ∈ U and ∀a, b ∈ K, by 2° b) we have ax ∈ U and by ∈ U, and by 2° a) ax + by ∈ U.
3° ⇒ 1°: taking a = 1, b = −1 it results that x − y ∈ U, which proves that U ⊆ V is an abelian subgroup. On the other hand, for b = 0 we obtain ax ∈ U for ∀a ∈ K, ∀x ∈ U, and the axioms II from the definition of the vector space are verified immediately, so the subset U ⊆ V possesses a structure of vector space.
Examples
1° The set {0} is a subspace of V, called the null subspace of V. Any subspace different from the vector space V and from the null subspace is called a proper subspace.
2° The set of symmetric (antisymmetric) matrices of order n is a subspace of the set of square matrices of order n.
3° The set R_n[X] of polynomials with real coefficients of degree at most n represents a vector subspace of the vector space of polynomials with real coefficients.
4° The subsets R_x = {(x, 0) | x ∈ R} and R_y = {(0, y) | y ∈ R} are vector subspaces of the arithmetic space R². More generally, the set of points of any line which passes through the origin of the space R² determines a vector subspace. These vector subspaces represent the solution sets of certain linear homogeneous equations in two unknowns.
2.3 Proposition.
Let V_1 and V_2 be two subspaces of the K-vector space V. The subsets V_1 ∩ V_2 ⊆ V and V_1 + V_2 = {v = v_1 + v_2 | v_1 ∈ V_1, v_2 ∈ V_2} ⊆ V are vector subspaces.
Proof. For x, y ∈ V_1 ∩ V_2 we have x, y ∈ V_1 and x, y ∈ V_2; since V_1 and V_2 are vector subspaces of V, it results that for ∀a, b ∈ K we have ax + by ∈ V_1 and ax + by ∈ V_2, so ax + by ∈ V_1 ∩ V_2. Using Theorem 2.2, the first part of the proposition results.
If x = x_1 + x_2 and y = y_1 + y_2 belong to V_1 + V_2, with x_1, y_1 ∈ V_1 and x_2, y_2 ∈ V_2, then for ∀a, b ∈ K, ax + by = (ax_1 + by_1) + (ax_2 + by_2) ∈ V_1 + V_2.
Observation. The union V_1 ∪ V_2 ⊆ V is not, in general, a vector subspace.
Example. The vector subspaces R_x and R_y defined in example 4° verify the relations:
R_x ∩ R_y = {(0, 0)} and R_x + R_y = R².
Indeed, if (x, y) ∈ R_x ∩ R_y, then (x, y) ∈ R_x and (x, y) ∈ R_y, hence y = 0 and x = 0, which proves that the subspace R_x ∩ R_y consists only of the null vector.
For (x, y) ∈ R² we have (x, 0) ∈ R_x and (0, y) ∈ R_y such that (x, y) = (x, 0) + (0, y), which proves that R² ⊆ R_x + R_y. The reverse inclusion is obvious.
2.4 Proposition
Let V_1, V_2 ⊆ V be two vector subspaces and v ∈ V_1 + V_2. The decomposition v = v_1 + v_2, v_1 ∈ V_1, v_2 ∈ V_2, is unique if and only if V_1 ∩ V_2 = {0}.
Proof: The necessity of the condition is proved by reductio ad absurdum. Suppose V_1 ∩ V_2 ≠ {0}; then there exists v ∈ V_1 ∩ V_2, v ≠ 0, which can be written v = 0 + v or v = v + 0, contradicting the uniqueness of the decomposition, so V_1 ∩ V_2 = {0}.
To prove the sufficiency of the condition we admit that v = v_1 + v_2 = v′_1 + v′_2. Because v_1 − v′_1 ∈ V_1 and v′_2 − v_2 ∈ V_2, the vector v_1 − v′_1 = v′_2 − v_2 is contained in V_1 ∩ V_2. From V_1 ∩ V_2 = {0} it results that v_1 = v′_1 and v_2 = v′_2.
If V_1 and V_2 are two vector subspaces of the vector space V and V_1 ∩ V_2 = {0}, then the sum V_1 + V_2 is called a direct sum and is denoted by V_1 ⊕ V_2. Moreover, if V_1 ⊕ V_2 = V, then V_1 and V_2 are called supplementary subspaces. If V_1 ⊆ V is a given subspace and there exists a subspace V_2 ⊆ V such that V = V_1 ⊕ V_2, then V_2 is called an algebraic complement of the subspace V_1.
Example. The vector subspaces R_x and R_y, satisfying the properties R_x ∩ R_y = {(0, 0)} and R_x + R_y = R², are supplementary vector subspaces, and the arithmetic space R² can be represented under the form R² = R_x ⊕ R_y. This fact permits any vector (x, y) ∈ R² to be written in a unique way as the sum of the vectors (x, 0) ∈ R_x and (0, y) ∈ R_y, (x, y) = (x, 0) + (0, y).
Observation. The notions of sum and direct sum can be extended to a finite number of terms.
2.5 Definition.
Let V be a vector space over the field K and S a nonempty subset of it. A vector v ∈ V of the form
v = λ_1x_1 + λ_2x_2 + … + λ_px_p, λ_i ∈ K, x_i ∈ S,
is called a finite linear combination of elements of S.
2.6 Theorem.
If S is a nonempty subset of V, then the set of all finite linear combinations of elements of S, denoted L(S) or <S>, is a vector subspace of V, called the subspace generated by the set S or the linear span of S.
Proof. We apply Theorem 2.2: for x, y ∈ L(S) and a, b ∈ K, x and y are finite linear combinations of elements of S, so ax + by is again a finite linear combination of elements of S, that is, ax + by ∈ L(S).
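Whether a given vector belongs to L(S) can be tested numerically with the rank criterion rank(S) = rank(S | v): appending v to the generators does not raise the rank exactly when v is a linear combination of them. A sketch with illustrative vectors (the helper in_span is ours, assuming NumPy is available):

```python
import numpy as np

# Generators of L(S) as rows of a matrix; the vectors are illustrative.
S = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

def in_span(S, v):
    # v ∈ L(S) iff appending v as a row does not increase the rank
    return np.linalg.matrix_rank(np.vstack([S, v])) == np.linalg.matrix_rank(S)

print(in_span(S, np.array([2.0, 3.0, 5.0])))   # True: 2*(1,0,1) + 3*(0,1,1)
print(in_span(S, np.array([0.0, 0.0, 1.0])))   # False
```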
2.7 Consequence.
If V_1 and V_2 are two vector subspaces of the vector space V, then L(V_1 ∪ V_2) = V_1 + V_2.
2.8 Definition.
A subset S ⊆ V is called a system of generators for the vector space V if the subspace generated by the subset S coincides with V, L(S) = V.
If the subset S = {x_1, x_2, …, x_n} is finite, then for any vector v ∈ V there exist λ_i ∈ K, i = 1, …, n, such that v = λ_1x_1 + λ_2x_2 + … + λ_nx_n.
A generalization of the notion of vector subspace is given by the notion of linear variety.
2.9 Definition.
A subset L ⊆ V is called a linear variety in the vector space V if there exists a vector x_0 ∈ L such that the set
V_L = {v = x − x_0 | x ∈ L}
is a vector subspace of V.
The subspace V_L is called the director subspace of the linear variety L.
Example. Let us consider in R² a line L which passes through the point x_0 = (a_0, b_0). For any point x = (a, b) ∈ L, the vector v = x − x_0 = (a − a_0, b − b_0) is situated on the line parallel to L which passes through the origin.
[Figure: the line L and the parallel line through the origin, the director subspace V_L]
Finally, the subset of points of the vector space R² situated on any line (L) of the plane represents a linear variety having as director subspace the line which passes through the origin and is parallel to the line (L).
A vector subspace is a particular case of linear variety: it is a linear variety of the vector space V which contains the null vector of the vector space V (x_0 = 0).
Let V be a K-vector space and S = {x_1, x_2, …, x_n} ⊆ V a subset of vectors.
2.10 Definition.
The subset of vectors S ⊆ V is called linearly independent (free, or the vectors x_1, x_2, …, x_n are linearly independent) if the equality
λ_1x_1 + λ_2x_2 + … + λ_nx_n = 0, λ_i ∈ K,
holds only for λ_1 = λ_2 = … = λ_n = 0.
A set (finite or not) of vectors of a vector space is linearly independent if every finite system of its vectors is a system of linearly independent vectors.
2.11 Definition.
The subset of vectors S ⊆ V is called linearly dependent (tied, or the vectors x_1, x_2, …, x_p are linearly dependent) if there exist λ_1, λ_2, …, λ_p ∈ K, not all zero, such that λ_1x_1 + λ_2x_2 + … + λ_px_p = 0.
Remark: If a finite linear combination formed with the vectors x_1, x_2, …, x_p ∈ V vanishes with at least one nonzero coefficient, so that one vector can be expressed in terms of the others, then the vectors x_1, x_2, …, x_p are linearly dependent; otherwise they are linearly independent.
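For S ⊆ R^n the definitions above reduce to a rank computation: the vectors are linearly independent exactly when the matrix having them as rows has rank equal to their number. A sketch with illustrative vectors (assuming NumPy):

```python
import numpy as np

def independent(vectors):
    # x_1, ..., x_p are linearly independent iff rank of the p×n matrix is p,
    # i.e. only the trivial λ_i annihilate the linear combination.
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))   # True
print(independent([(1, 1, 1), (1, 1, 0), (2, 2, 1)]))   # False: v3 = v1 + v2
```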
2.12 Theorem.
If S = {x_1, x_2, …, x_p} ⊆ V is a linearly independent set and L(S) the linear span of S, then any set of p + 1 elements of L(S) is linearly dependent.
Proof. Let y_i = Σ_{j=1}^p a_ij x_j, i = 1, 2, …, p + 1, be vectors of L(S). The relation λ_1y_1 + λ_2y_2 + … + λ_{p+1}y_{p+1} = 0 is equivalent to Σ_{j=1}^p (Σ_{i=1}^{p+1} λ_i a_ij) x_j = 0. Since S is linearly independent, we obtain the linear homogeneous system Σ_{i=1}^{p+1} a_ij λ_i = 0, j = 1, 2, …, p, with p equations and p + 1 unknowns, which admits nontrivial solutions λ_1, λ_2, …, λ_{p+1}; hence the vectors y_1, y_2, …, y_{p+1} are linearly dependent.
§3. Base and dimension
Let V be a K-vector space.
3.1 Definition.
A subset B (finite or not) of vectors of V is called a base of the vector space V if:
1) B is linearly independent;
2) B represents a system of generators for V.
The vector space V is called finitely generated or finite dimensional if there exists a finite system of generators of it.
3.2 Theorem.
If V ≠ {0} is a finitely generated vector space and S is a system of generators of V, then there exists a base B ⊆ S of the vector space V. (From any finite system of generators of a vector space a base can be extracted.)
Proof: First we prove that S contains nonzero vectors. Suppose S = {0}; then any x ∈ V can be written x = λ·0 = 0 (S being a system of generators), a contradiction, so S ≠ {0}.
Let now x_1 ∈ S be a nonzero vector. The set L = {x_1} ⊆ S represents a linearly independent system. We continue adding nonzero vectors of S for which the subset L remains linearly independent. Suppose S contains n elements; then S has 2^n finite subsets, so after a finite number of steps we find L ⊆ S, a system of linearly independent vectors, such that for every L′ ⊆ S with L ⊊ L′, L′ represents a linearly dependent subset (L is maximal in the sense of the order relation).
L is a system of generators for V. Indeed, if L = {x_1, …, x_m}: for m = n we have L = S, which is a system of generators; if m < n, then for any x ∈ S \ L the set L ∪ {x} represents a system of linearly dependent vectors (L is maximal), so x is a linear combination of the elements of L. Hence L(L) = L(S) = V and B = L is the sought base.
3.3 Consequence.
If V ≠ {0}, S ⊆ V is a finite system of generators and L_1 ⊆ S a linearly independent system, then there exists a base B of the vector space V such that L_1 ⊆ B ⊆ S.
A vector space V is finite dimensional if it has a finite base or if V = {0}; otherwise it is called infinite dimensional.
Examples
1° In the arithmetic space K^n the subset B = {e_1, e_2, …, e_n}, where e_1 = (1, 0, …, 0), e_2 = (0, 1, …, 0), …, e_n = (0, 0, …, 1), represents a base of the vector space K^n, called the canonical base.
2° In the vector space of polynomials with real coefficients R[X], the subset B = {1, X, X², …, X^n, …} constitutes a base. R[X] is an infinite dimensional space.
3.4 Proposition.
In a finitely generated K-vector space V, any two bases have the same number of elements.
Proof. Let us consider in the finitely generated vector space V the bases B and B′, having card B = n, respectively card B′ = n′. Using Consequence 3.3 we obtain successively n ≤ n′ and n′ ≤ n, so n = n′.
The last proposition permits the introduction of the notion of dimension of a vector space.
3.5 Definition.
The dimension of a finitely generated vector space is the number of vectors of one of its bases; it is denoted by dim V. The null space has dimension 0.
Observation. If V is a vector space of dimension dim V = n, then:
a) a system of n vectors is a base ⇔ it is linearly independent;
b) a system of n vectors is a base ⇔ it is a system of generators;
c) any system of m > n vectors is linearly dependent.
We will denote an n-dimensional K-vector space by V_n, dim V_n = n.
3.6 Proposition.
If B = {e_1, e_2, …, e_n} is a base of the K-vector space V_n, then any vector x ∈ V_n admits a unique expression x = λ_1e_1 + λ_2e_2 + … + λ_ne_n.
Proof. Suppose that x ∈ V_n has another expression x = μ_1e_1 + μ_2e_2 + … + μ_ne_n. By subtraction, (λ_1 − μ_1)e_1 + … + (λ_n − μ_n)e_n = 0, and B being linearly independent, λ_i = μ_i, i = 1, …, n.
The scalars λ_1, λ_2, …, λ_n are called the coordinates of the vector x in the base B, and the bijection f: V_n → K^n, f(x) = (λ_1, λ_2, …, λ_n), is called a system of coordinates on V_n.
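Numerically, the coordinates of Proposition 3.6 are the unique solution of a linear system: if the base vectors are the columns of a matrix E, then x = Eλ. A sketch with an illustrative base of R² (assuming NumPy):

```python
import numpy as np

# Base vectors e_1 = (1,0), e_2 = (1,1) written as the columns of E;
# the base and the vector x are illustrative, not from the text.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x = np.array([3.0, 2.0])

lam = np.linalg.solve(E, x)     # coordinates λ of x in the base {e_1, e_2}
print(lam)                      # [1. 2.]  since x = 1*e_1 + 2*e_2
```

The reconstruction E @ lam recovers x, which is exactly the expansion x = Σ λ_i e_i.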
3.7 Theorem.
(Steinitz, the exchange theorem). If B = {e_1, e_2, …, e_n} is a base of the vector space V_n and S = {f_1, f_2, …, f_p} a system of linearly independent vectors of V_n, then p ≤ n and, after an eventual renumbering of the vectors of the base B, the system B′ = {f_1, …, f_p, e_{p+1}, …, e_n} represents also a base of V_n.
Proof: Applying the result of Consequence 3.3 and the fact that any two bases have the same cardinal, it results p ≤ n.
For the second part of the theorem we use the method of complete mathematical induction. For p = 1, f_1 ∈ V_n is written in the base B under the form f_1 = a_1e_1 + a_2e_2 + … + a_ne_n with at least one nonzero coefficient; after a renumbering of the vectors of B we may suppose a_1 ≠ 0, so e_1 is expressed in terms of f_1, e_2, …, e_n and {f_1, e_2, …, e_n} is a base of V_n. The induction step replaces, in the same manner, one more vector of the base by the next vector of S.
3.8 Consequence.
(The completion theorem.) Any system of linearly independent vectors of a vector space V_n can be completed up to a base of V_n.
3.9 Consequence.
Any subspace V′ of a finitely generated vector space V_n admits at least one supplementary subspace.
3.10 Theorem.
(Grassmann, the dimension theorem). If V_1 and V_2 are two vector subspaces of the K-vector space V_n, then
dim(V_1 + V_2) = dim V_1 + dim V_2 − dim(V_1 ∩ V_2). (3.1)
Proof: Let {a_1, …, a_r} be a base of the subspace V_1 ∩ V_2 ⊆ V_1.
By Consequence 3.8 we can complete this system of linearly independent vectors to a base of V_1, say B_1 = {a_1, …, a_r, b_{r+1}, …, b_{r+s}}. In a similar way we consider in the vector space V_2 the base B_2 = {a_1, …, a_r, c_{r+1}, …, c_{r+p}}. It is easy to show that the subset B = {a_1, …, a_r, b_{r+1}, …, b_{r+s}, c_{r+1}, …, c_{r+p}} is a system of generators for V_1 + V_2. The subset B is linearly independent. Indeed, a relation
α_1a_1 + … + α_ra_r + β_{r+1}b_{r+1} + … + β_{r+s}b_{r+s} + γ_{r+1}c_{r+1} + … + γ_{r+p}c_{r+p} = 0
means that the vector γ_{r+1}c_{r+1} + … + γ_{r+p}c_{r+p} ∈ V_2 is equal to a vector of V_1, so it is contained in V_1 ∩ V_2; expressing it in the base {a_1, …, a_r} and using the linear independence of B_2 we obtain γ_{r+1} = … = γ_{r+p} = 0.
Using this result in the first relation and with respect to the fact that B_1 is a base of V_1, it results α_1 = α_2 = … = α_r = β_{r+1} = β_{r+2} = … = β_{r+s} = 0, so B is linearly independent and hence a base of V_1 + V_2.
In these conditions we can write dim(V_1 + V_2) = r + s + p = (r + s) + (r + p) − r = dim V_1 + dim V_2 − dim(V_1 ∩ V_2). Q.e.d.
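Grassmann's formula (3.1) can be checked numerically, computing dimensions as matrix ranks of spanning rows; the two subspaces of R⁴ below are illustrative (assuming NumPy):

```python
import numpy as np

# Two illustrative subspaces of R^4, given by spanning rows.
V1 = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
V2 = np.array([[0, 1, 0, 0], [0, 0, 1, 0]], dtype=float)

dim1 = np.linalg.matrix_rank(V1)                      # dim V1 = 2
dim2 = np.linalg.matrix_rank(V2)                      # dim V2 = 2
dim_sum = np.linalg.matrix_rank(np.vstack([V1, V2]))  # dim(V1 + V2) = 3
dim_int = dim1 + dim2 - dim_sum                       # formula (3.1): = 1
print(dim_sum, dim_int)                               # 3 1
```

Here V1 ∩ V2 is the line spanned by (0, 1, 0, 0), in agreement with dim_int = 1.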
3.11 Consequence.
If the vector space V_n is represented under the form V_n = V_1 ⊕ V_2, then dim V_n = dim V_1 + dim V_2.
Let us consider a K-vector space V_n and B = {e_1, e_2, …, e_n}, respectively B′ = {e′_1, e′_2, …, e′_n}, two bases of V_n. Any vector of B′ can be expressed in terms of the elements of the other base. So we have the relations:
e′_j = Σ_{i=1}^n a_ij e_i, j = 1, 2, …, n.
Denoting B = ᵗ[e_1, e_2, …, e_n], B′ = ᵗ[e′_1, e′_2, …, e′_n] and A = (a_ij), the relations above can be written in the matrix form
B′ = ᵗA·B (3.2)
Let now x ∈ V_n be a vector expressed in the two bases of the vector space V_n through the relations:
x = Σ_{i=1}^n x_i e_i = Σ_{j=1}^n x′_j e′_j.
With respect to the relations (3.2) we obtain
x = Σ_{j=1}^n x′_j e′_j = Σ_{j=1}^n x′_j Σ_{i=1}^n a_ij e_i = Σ_{i=1}^n (Σ_{j=1}^n a_ij x′_j) e_i.
B being a base, the equality of the two expressions of x gives
x_i = Σ_{j=1}^n a_ij x′_j, i = 1, 2, …, n, (3.3)
relations which characterize the transformation of the coordinates of a vector under a change of the base of the vector space V_n.
If we denote by X = ᵗ[x_1, x_2, …, x_n] the column matrix of the coordinates of the vector x ∈ V_n in the base B and respectively by X′ = ᵗ[x′_1, x′_2, …, x′_n] the coordinate matrix of the same vector x ∈ V_n in the base B′, we can write
X = AX′ (3.4)
The matrix A = (a_ij) is called the passing matrix from the base B to the base B′. In conclusion, in a finite dimensional vector space we have the change of base theorem:
3.12 Theorem.
If in the vector space V_n the change of the base B with the base B′ is given by the relation B′ = ᵗA·B, then the relation between the coordinates of a vector x ∈ V_n in the two bases is given by X = AX′.
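Theorem 3.12 can be illustrated numerically: the columns of A express the new base vectors in the old base, and the coordinates transform as X = AX′. The bases and values below are illustrative (assuming NumPy):

```python
import numpy as np

E  = np.eye(2)                 # old base B: the canonical base of R^2, as columns
A  = np.array([[1.0, 1.0],
               [1.0, 2.0]])    # passing matrix from B to B'
Ep = E @ A                     # columns of B': e'_1 = (1,1), e'_2 = (1,2)

Xp = np.array([2.0, 1.0])      # coordinates of some x in the base B'
X  = A @ Xp                    # coordinates of the same x in the base B
print(X)                       # [3. 4.]  = 2*(1,1) + 1*(1,2)
```

Both expansions describe the same vector: Ep @ Xp equals E @ X, which is relation (3.4) in action.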
Let V_n be a vector space and B = {e_1, e_2, …, e_n} a base of it. If the vectors v_1, v_2, …, v_p ∈ V_n, p ≤ n, are expressed through the relations v_j = Σ_{i=1}^n a_ij e_i, j = 1, 2, …, p, then to this system of vectors we associate the matrix A = (a_ij), whose columns contain the coordinates of the vectors v_j in the base B.
3.13 Theorem.
The rank of the matrix A is equal to the maximum number of linearly independent column vectors.
Proof. Suppose that rank A = r, which means that (after an eventual renumbering) the determinant of order r,
D = det(a_ij), 1 ≤ i, j ≤ r,
is nonzero. D ≠ 0 implies the linear independence of the vectors v_1, v_2, …, v_r.
Let v_k be a column, r < k ≤ p, and consider the determinants D_i of order r + 1, i = 1, 2, …, n, obtained by bordering D with the corresponding elements of the row i and of the column k. Each of these determinants is null: for i ≤ r, D_i has two identical lines, and for i > r the order of D_i is greater than the rank r. Expanding D_i along the last line we have:
a_i1·Γ_1 + a_i2·Γ_2 + … + a_ir·Γ_r + a_ik·D = 0, i = 1, 2, …, n,
where the cofactors Γ_1, Γ_2, …, Γ_r and D do not depend on i. These scalar relations express the fact that any column v_k, r < k ≤ p, is a linear combination of the first r columns of the matrix A, so any r + 1 column vectors are linearly dependent.
3.14 Consequence.
If B = {e_1, e_2, …, e_n} is a base of V_n, then the set B′ = {e′_j = Σ_{i=1}^n a_ij e_i, j = 1, 2, …, n} is a base of V_n if and only if det A ≠ 0.
Let V and W be two vector spaces over the field K.
3.15 Definition.
A map T: V → W with the properties:
T(x + y) = T(x) + T(y), ∀x, y ∈ V
T(ax) = aT(x), ∀x ∈ V, ∀a ∈ K
is called a vector space morphism or linear transformation.
A bijective linear transformation between two vector spaces will be called a vector space isomorphism.
3.16 Theorem.
Two vector spaces V and W over the field K, of finite dimension, are isomorphic if and only if they have the same dimension.
A system of coordinates on a finite dimensional vector space V_n, f: V_n → K^n, f(x) = (x_1, x_2, …, x_n), is an isomorphism of vector spaces; consequently, any n-dimensional K-vector space is isomorphic to the arithmetic space K^n.
§4. Euclidean vector spaces
Let V be a real vector space.
If we add, besides the vector space structure, the notion of scalar product, then in a vector space the notions of length of a vector, angle of two vectors, orthogonality, and others can be defined.
4.1 Definition.
A map g: V × V → R, g(x, y) = <x, y>, with the properties:
a) <x, y + z> = <x, y> + <x, z>, ∀x, y, z ∈ V
b) <λx, y> = λ<x, y>, ∀x, y ∈ V, ∀λ ∈ R
c) <x, y> = <y, x>, ∀x, y ∈ V
d) <x, x> ≥ 0, <x, x> = 0 ⇔ x = 0, ∀x ∈ V
is called a scalar product on the vector space V.
In a vector space with scalar product the following properties hold:
1) <x + y, z> = <x, z> + <y, z>, ∀x, y, z ∈ V
2) <x, λy> = λ<x, y>, ∀x, y ∈ V, ∀λ ∈ R.
Theorem (the Cauchy–Schwarz inequality). In a real vector space with scalar product,
<x, y>² ≤ <x, x>·<y, y> (4.1)
the equality taking place if and only if the vectors x and y are linearly dependent.
Proof: If x = 0 or y = 0, then (4.1) holds with equality. Suppose x ≠ 0 and y ≠ 0, and let z = λx + μy. Then 0 ≤ <z, z> = <λx + μy, λx + μy> = λ²<x, x> + 2λμ<x, y> + μ²<y, y>, the equality taking place for z = 0. If we take λ = <y, y> > 0, then, dividing by λ, we obtain <x, x><y, y> + 2μ<x, y> + μ² ≥ 0, and for μ = −<x, y> the inequality becomes <x, x><y, y> − <x, y>² ≥ 0, q.e.d.
Examples. 1° In the arithmetic space R^n, for any two elements x = (x_1, x_2, …, x_n) and y = (y_1, y_2, …, y_n), the operation
<x, y> := x_1y_1 + x_2y_2 + … + x_ny_n (4.2)
defines a scalar product. The scalar product defined in this way, called the usual scalar product, endows the arithmetic space R^n with a Euclidean structure.
2° The set C([a, b]) of continuous functions on the interval [a, b] is a vector space with respect to the scalar product defined by
<f, g> = ∫_a^b f(x)g(x) dx.
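The usual scalar product (4.2) and the Cauchy–Schwarz inequality (4.1) can be checked numerically; the vectors below are illustrative (assuming NumPy):

```python
import numpy as np

def dot(x, y):
    # the usual scalar product (4.2) on R^n
    return float(np.sum(np.asarray(x) * np.asarray(y)))

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

lhs = dot(x, y) ** 2            # <x,y>^2 = 11^2 = 121
rhs = dot(x, x) * dot(y, y)     # <x,x><y,y> = 9 * 25 = 225
print(lhs <= rhs)               # True, as (4.1) requires
print(dot(2 * x, y) == 2 * dot(x, y))   # homogeneity, axiom b): True
```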
The scalar product permits the definition of the norm of a vector, ‖x‖ = √<x, x>, with the properties:
a) ‖x‖ ≥ 0 and ‖x‖ = 0 ⇔ x = 0;
b) ‖λx‖ = |λ|·‖x‖;
c) ‖x + y‖ ≤ ‖x‖ + ‖y‖ (the triangle inequality).
Proof: Conditions a) and b) result immediately from the definition of the norm and the properties of the scalar product. Axiom c) results using the Cauchy–Schwarz inequality:
‖x + y‖² = <x + y, x + y> = ‖x‖² + 2<x, y> + ‖y‖² ≤ ‖x‖² + 2‖x‖·‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)²,
from where the triangle inequality results.
A space on which a "norm" function is defined is called a normed space. The norm defined by a scalar product is called a Euclidean norm.
Example: In the arithmetic space R^n the norm of a vector x = (x_1, x_2, …, x_n) is given by ‖x‖ = √(x_1² + x_2² + … + x_n²).
A vector e ∈ V is called a versor if ‖e‖ = 1. The notion of versor permits any vector x ∈ V, x ≠ 0, to be written as x = ‖x‖·e, where e = x/‖x‖.
The Cauchy–Schwarz inequality, |<x, y>| ≤ ‖x‖·‖y‖, permits the definition of the angle between two vectors: the angle θ ∈ [0, π] given by
cos θ = <x, y> / (‖x‖·‖y‖).
Example: In the arithmetic vector space R^n the distance d is given by
d(x, y) = ‖x − y‖ = √((x_1 − y_1)² + (x_2 − y_2)² + … + (x_n − y_n)²).
A set endowed with a distance is called a metric space. If the norm defined on the vector space V is Euclidean, then the distance defined by it is called the Euclidean metric. In conclusion, any Euclidean space is a metric space. A Euclidean structure on V induces on any subspace V′ ⊆ V a Euclidean structure.
The scalar product defined on a vector space V permits the introduction of the notion of orthogonality. Two vectors x, y ∈ V are called orthogonal if <x, y> = 0. A set S ⊆ V is said to be orthogonal if its vectors are orthogonal two by two. An orthogonal set is called orthonormal if every element of it has norm equal to 1.
Proposition. In a Euclidean vector space V, any orthogonal set of nonzero vectors is linearly independent.
Proof. Let S ⊆ V be an orthogonal set of nonzero vectors and λ_1x_1 + λ_2x_2 + … + λ_nx_n = 0 an arbitrary finite linear combination of elements of S. Multiplying the relation scalarly by x_j ∈ S, we obtain λ_1<x_1, x_j> + λ_2<x_2, x_j> + … + λ_n<x_n, x_j> = 0. S being orthogonal, <x_i, x_j> = 0 for i ≠ j, and the relation reduces to λ_j<x_j, x_j> = 0. For x_j ≠ 0 we have <x_j, x_j> > 0, so λ_j = 0, j = 1, 2, …, n, which proves the linear independence of S.
If in a Euclidean vector space V_n we consider an orthogonal base B = {e_1, e_2, …, e_n}, then any vector x ∈ V_n can be written in a unique way under the form
x = Σ_{i=1}^n λ_i e_i, with λ_i = <x, e_i>/<e_i, e_i>.
Indeed, multiplying the vector x scalarly by e_j and using the orthogonality of the base, we obtain <x, e_j> = λ_j<e_j, e_j>, from where the expression of the coordinates λ_j. If B is orthonormal, we have λ_i = <x, e_i>, i = 1, 2, …, n.
The set S^⊥ of all vectors orthogonal to every element of S ⊆ V is a vector subspace of V. Proof: If y_1, y_2 ∈ S^⊥, then <y_1, x> = 0 and <y_2, x> = 0, ∀x ∈ S. For ∀a, b ∈ R we have <ay_1 + by_2, x> = a<y_1, x> + b<y_2, x> = 0, q.e.d.
Let V_n be a finite dimensional Euclidean vector space.
Theorem (Gram–Schmidt). Any linearly independent system of vectors {v_1, v_2, …, v_p} ⊆ V_n can be replaced by an orthonormal system {e_1, e_2, …, e_p} such that L({e_1, …, e_p}) = L({v_1, …, v_p}).
Proof: First we build an orthogonal set and then we norm every element. We consider w_1 = v_1 and w_2 = v_2 + kw_1 ≠ 0 and determine k by imposing the condition <w_1, w_2> = 0. We obtain k = −<v_2, w_1>/<w_1, w_1>. We continue with w_3 = v_3 + k_1w_1 + k_2w_2 ≠ 0 and determine the scalars k_1, k_2 by imposing the condition that w_3 be orthogonal to w_1 and w_2, that is,
<w_3, w_1> = <v_3, w_1> + k_1<w_1, w_1> = 0, <w_3, w_2> = <v_3, w_2> + k_2<w_2, w_2> = 0.
We obtain k_1 = −<v_3, w_1>/<w_1, w_1> and k_2 = −<v_3, w_2>/<w_2, w_2>.
After p steps we obtain the vectors w_1, w_2, …, w_p, orthogonal two by two and linearly independent (by the proposition above), given by
w_j = v_j − Σ_{i=1}^{j−1} (<v_j, w_i>/<w_i, w_i>)·w_i, j = 1, 2, …, p.
Define e_j = w_j/‖w_j‖, j = 1, 2, …, p. Since the elements e_1, e_2, …, e_p are expressed in terms of v_1, v_2, …, v_p, and both systems are linearly independent, we have L({e_1, …, e_p}) = L({v_1, …, v_p}), q.e.d.
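The construction in the proof translates directly into code; below is a minimal sketch of the Gram–Schmidt process, applied to the vectors of proposed problem 25 a) (assuming NumPy):

```python
import numpy as np

def gram_schmidt(vectors):
    # Orthogonalize: w_j = v_j - Σ_{i<j} (<v_j, w_i>/<w_i, w_i>) w_i,
    # then norm: e_j = w_j / ||w_j||.
    ws = []
    for v in map(np.asarray, vectors):
        w = v.astype(float)
        for u in ws:
            w = w - (np.dot(v, u) / np.dot(u, u)) * u
        ws.append(w)
    return [w / np.linalg.norm(w) for w in ws]

es = gram_schmidt([(1, 1, 0), (1, 0, 1), (0, 0, 1)])   # problem 25 a)
G = np.array([[np.dot(a, b) for b in es] for a in es]) # Gram matrix of the e_j
print(np.allclose(G, np.eye(3)))                       # True: orthonormal system
```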
Let B = {e_1, e_2, …, e_n} and B′ = {e′_1, e′_2, …, e′_n} be two orthonormal bases of the Euclidean vector space V_n. The relations between the elements of the two bases are given by
e′_j = Σ_{i=1}^n a_ij e_i, j = 1, 2, …, n.
B′ being orthonormal, we have:
δ_jk = <e′_j, e′_k> = Σ_{i=1}^n a_ij a_ik, j, k = 1, 2, …, n.
If A = (a_ij) is the passing matrix from the base B to B′, then the previous relations take the form ᵗA·A = I_n, so A is an orthogonal matrix.
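The relation ᵗA·A = I_n can be checked numerically on an illustrative passing matrix, here a rotation of R² (whose columns form an orthonormal base, assuming NumPy):

```python
import numpy as np

t = np.pi / 6
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # rotation: columns are orthonormal

print(np.allclose(A.T @ A, np.eye(2)))    # True: A is an orthogonal matrix
```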
§5. Proposed problems
1. Let V and W be two K-vector spaces. Show that V × W = {(x, y) | x ∈ V, y ∈ W} is a K-vector space with respect to the operations:
(x_1, y_1) + (x_2, y_2) := (x_1 + x_2, y_1 + y_2)
a(x, y) := (ax, ay), ∀x_1, x_2 ∈ V, ∀y_1, y_2 ∈ W, ∀a ∈ K.
2. Decide whether the operations defined on the indicated sets determine a vector space structure.
3. Let V be a real vector space. We define on V × V operations of addition and of multiplication with complex scalars. Show that V × V is a vector space over the field of complex numbers C (this space will be called the complexification of V and will be denoted by ^C V).
4. Establish which of the following subsets form vector subspaces in the indicated vector spaces: a) S_1; b) S_2; c) S_3; d) S_4; e) S_5.
5. Let F_[a,b] be the set of real functions defined on the interval [a, b] ⊆ R.
a) Show that the operations (f + g)(x) = f(x) + g(x), (af)(x) = a·f(x), ∀a ∈ R, ∀x ∈ [a, b], define a structure of R-vector space on the set F_[a,b].
b) If the interval [a, b] ⊆ R is symmetric with respect to the origin, show that the subsets F_+ (odd functions) and F_− (even functions) are vector subspaces and F_[a,b] = F_+ ⊕ F_−.
6. Show that the subsets S (symmetric matrices) and A (antisymmetric matrices) are vector subspaces and M_n(K) = S ⊕ A.
7. Let v_1, v_2, v_3 ∈ V be three linearly independent vectors. Determine a ∈ R such that the indicated vectors are linearly independent, respectively linearly dependent.
8. Show that the vectors x, y, z ∈ R³, x = (1, 1, 1), y = (1, 1, 1), z = (1, 3, 3), are linearly dependent and find the linear dependence relation.
9. Establish the dependence or the linear independence of the vector systems: a) S_1; b) S_2; c) S_3.
10. Determine the sum and the intersection of the vector subspaces U, V ⊆ R³.
11. Determine the sum and the intersection of the subspaces generated by the indicated vector systems U and V.
12. Determine the subspaces U ∩ V ⊆ R^n, where U and V = L(…) are the indicated subspaces.
13. Determine a base in the subspaces U + W, U ∩ W and verify Grassmann's theorem for: a) …; b) …
14. Let W_1 ⊆ R³ be the subspace generated by the vectors w_1 = (1, 1, 0) and w_2 = (1, 1, 2). Determine the supplementary subspace W_2 and decompose the vector x = (2, 2, 2) on the two subspaces.
15. Show which of the following vector systems form bases in the given vector spaces: a) S_1 ⊆ R²; b) S_2 ⊆ R³; c) S_3 ⊆ R³; d) S_4 ⊆ R_3[x]; e) S_5.
16. In R³ we consider the vector systems B = {…} and B′ = {…}. Show that B and B′ are bases, determine the passing matrix from the base B to the base B′ and the coordinates of the vector v = (2, 1, 1) (expressed in the canonical base) with respect to the two bases.
17. Let M_2(R) be the real vector space of square matrices of order 2 and B its canonical base.
a) Find a base B_1 in the subspace of symmetric matrices S_2 ⊆ M_2(R), respectively a base B_2 in the subspace of antisymmetric matrices A_2 ⊆ M_2(R). Determine the passing matrix from the canonical base B to the base B′ = B_1 ∪ B_2.
b) Express the matrix E = … in the base B′.
18. Verify whether the following operations define scalar products on the considered vector spaces:
a) <x, y> = 3x_1y_1 + x_1y_2 + x_2y_1 + 2x_2y_2, x = (x_1, x_2), y = (y_1, y_2) ∈ R²
b) <x, y> = x_1y_1 − 2x_2y_2, x = (x_1, x_2), y = (y_1, y_2) ∈ R²
c) <x, y> = x_1y_1 + x_2y_3 + x_3y_2, x = (x_1, x_2, x_3), y = (y_1, y_2, y_3) ∈ R³
19. Show that the operation defined on the polynomial set R_n[x] through <f, g> = … is a scalar product.
20. Verify that the following operations determine scalar products on the specified vector spaces and orthonormalize, with respect to these scalar products, the indicated function systems: a) <f, g> = …; b) <f, g> = …
21. Let the vectors x = (x_1, x_2, …, x_n), y = (y_1, y_2, …, y_n) ∈ R^n. Using the usual scalar product defined on the arithmetic space R^n, prove the following inequalities: a) …; b) …, and determine the conditions in which the equalities take place.
22. Orthonormalize the vector systems with respect to the usual scalar product:
a) v_1 = (1, 2, 2), v_2 = (1, 0, 1), v_3 = (5, 3, 7)
b) v_1 = (1, 1, 0), v_2 = (1, 0, 1), v_3 = (0, 0, 1).
23. Find the orthogonal projection of the vector v = (14, 3, 6) on the subspace generated by the vectors v_1 = (3, 0, 7), v_2 = (1, 4, 3), and the length of this projection.
24. Determine, in the arithmetic space R³, the orthogonal complement of the vector subspace of the solutions of the indicated system and find an orthonormal base in this complement.
25. Orthonormalize the following linearly independent vector systems:
a) v_1 = (1, 1, 0), v_2 = (1, 0, 1), v_3 = (0, 0, 1) in R³
b) v_1 = (1, 1, 0, 0), v_2 = (1, 0, 1, 0), v_3 = (1, 0, 0, 1), v_4 = (0, 1, 1, 1) in R⁴.
26. Determine the orthogonal complement of the subspaces generated by the following vector systems:
a) v_1 = (1, 2, 0), v_2 = (2, 0, 1) in R³
b) v_1 = (1, 1, 2, 0), v_2 = (3, 0, 2, 1), v_3 = (4, 1, 0, 1) in R⁴.
27. Find the projection of the vector v = (1, 1, 2) on the solution subspace of the equation x + y + z = 0.
28. Determine in R³ the orthogonal complement of the subspace generated by the vectors v_1 = (1, 0, 2), v_2 = (2, 0, 1). Find the decomposition v = w + w^⊥ of the vector v = (1, 1, 1) ∈ R³ on the two complementary subspaces and verify the relation ‖v‖² = ‖w‖² + ‖w^⊥‖².