# What is the Basis of a Vector Space

Linear Algebra / Monday, October 29th, 2018
(Last Updated On: November 8, 2018)

# Basis of a vector space

A basis of a vector space is a set of vectors of the space that (i) are linearly independent and (ii) span the vector space.

We have learnt that a given set of vectors spans or generates a certain vector space when every vector in the space can be expressed as a linear combination of the given set. Obviously there can be more than one set that can span a certain vector space; moreover the number of vectors in each generating set can vary.

 Example

The set {(1, 0), (0, 1)} spans the vector space of all two-dimensional real coordinate vectors; the set {(1, 2), (2, 1), (3, 3)} also spans the same vector space. Every vector in the space can be expressed as a linear combination of each of the given sets, and therefore each set is a spanning set even though the two sets differ in both content and number of elements.
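As a quick numerical check of this example (a sketch using NumPy, which the post itself does not require), a set of vectors spans R2 exactly when the matrix having those vectors as rows has rank 2:

```python
import numpy as np

# The two spanning sets from the example above, one vector per row.
set_a = np.array([[1, 0], [0, 1]], dtype=float)          # {(1, 0), (0, 1)}
set_b = np.array([[1, 2], [2, 1], [3, 3]], dtype=float)  # {(1, 2), (2, 1), (3, 3)}

# A set of vectors spans R^2 exactly when the matrix with those vectors
# as rows has rank 2.
print(np.linalg.matrix_rank(set_a))  # 2 -> spans R^2
print(np.linalg.matrix_rank(set_b))  # 2 -> also spans R^2, despite having 3 vectors
```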

There is, however, a distinction between two kinds of spanning sets: for a given vector space, some spanning sets are linearly independent and some are linearly dependent. The linearly independent spanning sets are very important in the study of linear algebra; such a set is called a basis of the vector space.

 Example 01

The set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is a basis for the vector space of all coordinate vectors with three real components over the field of real numbers because the set is linearly independent and the set spans the space.

 Example 02

The set {(0, 1), (1, 0), (1, 1)} is not a basis of the vector space of two dimensional real coordinate vectors over the field of real numbers, because the set is not linearly independent. The set does, however, span the space.
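The same rank test separates "spans the space" from "linearly independent" (again a NumPy sketch): the rank shows the set spans R2, while having more vectors than the rank shows it is dependent.

```python
import numpy as np

# The set from Example 02, one vector per row.
vectors = np.array([[0, 1], [1, 0], [1, 1]], dtype=float)

rank = np.linalg.matrix_rank(vectors)
print(rank)                 # 2 -> the set spans R^2 ...
print(rank < len(vectors))  # True -> ... but 3 vectors of rank 2 are linearly dependent
```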

 Theorem

If {β1, β2, …, βn} is a basis of a vector space V over a field F, then each vector in V can be expressed uniquely as a linear combination of β1, β2, …, βn.

Proof:

Since {β1, β2, …, βn} spans the vector space V, any vector β ∈ V can be expressed as a linear combination of β1, β2, …, βn.

Let us suppose that

β = a1β1 + a2β2 + … + anβn and β = b1β1 + b2β2 + … + bnβn

where the ai’s and bi’s ∈ F, be two such expressions of β. The theorem will be proved if we can show that ai = bi for i = 1, 2, …, n.

We have β = a1β1 + a2β2 + … + anβn = b1β1 + b2β2 + … + bnβn

⇒ (a1 – b1) β1 + (a2 – b2) β2 + … + (an – bn) βn = θ

Since β1, β2, …, βn are linearly independent, it follows that a1 – b1 = 0, a2 – b2 = 0, …, an – bn = 0

Therefore ai = bi , i = 1, 2, …, n.

Hence the theorem is proved.
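The uniqueness just proved can be seen numerically: if the basis vectors form the columns of an invertible matrix B, the coordinates of any β are the unique solution of B·a = β. A minimal sketch, with an illustrative basis of R3 chosen for this example (not taken from the text):

```python
import numpy as np

# An illustrative basis of R^3 (our own choice): the columns of B are the
# basis vectors, and they are linearly independent, so B is invertible.
B = np.column_stack([[2, 0, 0], [1, 1, 0], [1, 1, 1]]).astype(float)
assert round(np.linalg.det(B)) != 0

beta = np.array([3.0, 4.0, 1.0])

# Because B is invertible, the coordinate vector a with B @ a = beta
# exists and is unique -- exactly the statement of the theorem above.
a = np.linalg.solve(B, beta)
assert np.allclose(B @ a, beta)
print(a)
```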

 Theorem (Replacement Theorem)

Let {α1, α2, …, αn} be a basis of a vector space V over a field F, and let a non-zero vector β of V be expressed as β = c1α1 + c2α2 + … + cnαn, ci ∈ F. If cj ≠ 0, then {α1, α2, …, αj-1, β, αj+1, …, αn} is a new basis of V. [That is, β can replace αj in the basis.]

Proof:

β = c1 α1 + c2 α2 + … + cj-1 αj-1 + cj αj + cj+1 αj+1 + … + cn αn

⇒ cj αj = β – c1 α1 – c2 α2 – … – cj-1 αj-1 – cj+1 αj+1 – … – cn αn ……….(1)

Since cj ≠ 0, the inverse cj⁻¹ exists in F.

Therefore, multiplying both sides of (1) by cj⁻¹, we get

αj = cj⁻¹β – cj⁻¹c1 α1 – cj⁻¹c2 α2 – … – cj⁻¹cj-1 αj-1 – cj⁻¹cj+1 αj+1 – … – cj⁻¹cn αn

⇒ αj = p1 α1 + p2 α2 + … + pj-1 αj-1 + cj⁻¹β + pj+1 αj+1 + … + pn αn

where pi = –cj⁻¹ci for i ≠ j; the coefficient of β in this expression is cj⁻¹.

We first prove that {α1, α2, …, αj-1, β, αj+1, …, αn} is linearly independent.

Let us suppose that

d1 α1 + d2 α2 + … + dj-1 αj-1 + dj β + dj+1 αj+1 + … + dn αn = θ

⇒ d1 α1 + d2 α2 + … + dj-1 αj-1 + dj (c1 α1 + c2 α2 + … + cj-1 αj-1 + cj αj + cj+1 αj+1 + … + cn αn) + dj+1 αj+1 + … + dn αn = θ

⇒ (d1 + dj c1) α1 + (d2 + dj c2) α2 + … + (dj-1 + dj cj-1) αj-1 + djcjαj + (dj+1 + dj cj+1) αj+1 + … + (dn + dj cn) αn = θ

Since {α1, α2, …, α n} is linearly independent, we have

d1 + dj c1 = 0, d2 + dj c2 = 0,…, dj-1 + dj cj-1 = 0, djcj = 0, dj+1 + dj cj+1 = 0, …, dn + dj cn = 0

Since djcj = 0 but cj ≠ 0, we have dj = 0; hence it follows that d1 = d2 = … = dn = 0.

This proves that {α1, α2, …, αj-1, β, αj+1, …, αn} is a linearly independent set of vectors.

We now prove that L{α1, α2, …, αj-1, β, αj+1, …, αn} = V.

Let δ be an arbitrary vector in V. Since {α1, α2, …, αn} is a basis of V, there exist k1, k2, …, kn ∈ F such that

δ = k1 α1 + k2 α2 + … + kn αn

⇒ δ = k1 α1 + k2 α2 + … + kj-1 αj-1 + kj (p1 α1 + p2 α2 + … + pj-1 αj-1 + cj⁻¹β + pj+1 αj+1 + … + pn αn) + kj+1 αj+1 + … + kn αn

⇒ δ = s1 α1 + s2 α2 + … + sj-1 αj-1 + sj β + sj+1 αj+1 + … + sn αn

where si = ki + kjpi for i ≠ j, and sj = kjcj⁻¹.

Therefore δ ∈ L{α1, α2, …, αj-1, β, αj+1, …, αn}

i.e., V ⊆ L{α1, α2, …, αj-1, β, αj+1, …, αn} ……….(2)

But L{α1, α2, …, αj-1, β, αj+1, …, αn} is the smallest subspace of V containing the set {α1, α2, …, αj-1, β, αj+1, …, αn}, and hence is contained in V.

Therefore, L{α1, α2, …, αj-1, β, αj+1, …, αn} ⊆ V ……….(3)

Combining (2) and (3),

V = L{α1, α2, …, αj-1, β, αj+1, …, αn}

Hence {α1, α2, …, αj-1, β, αj+1, …, αn} is a basis of V.
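The replacement step can be illustrated numerically. Below, β is written in the standard basis of R3 with c3 ≠ 0 and c2 = 0 (illustrative numbers of our own choosing), so the theorem permits β to replace α3 but not α2:

```python
import numpy as np

# Standard basis of R^3 and a vector beta expressed in it.
alphas = np.eye(3)                      # rows are alpha_1, alpha_2, alpha_3
c = np.array([2.0, 0.0, 5.0])           # beta = 2*alpha1 + 0*alpha2 + 5*alpha3
beta = c @ alphas

# c[2] != 0, so the theorem lets beta replace alpha_3 (j = 3):
new_set = np.vstack([alphas[0], alphas[1], beta])
print(np.linalg.matrix_rank(new_set))   # 3 -> still a basis of R^3

# Replacing alpha_2 instead would fail, because c[1] == 0:
bad_set = np.vstack([alphas[0], beta, alphas[2]])
print(np.linalg.matrix_rank(bad_set))   # 2 -> not a basis
```

The failing case shows why the hypothesis cj ≠ 0 is essential: with c2 = 0, the vector α2 cannot be recovered from the remaining set.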

 Example 03

Prove that the set S = {(1, 0, 1), (0, 1, 1), (1, 1, 0)} is a basis of R3.

Solution:

Let α1 = (1, 0, 1), α2 = (0, 1, 1), α3 = (1, 1, 0)

We have to show that α1, α2, α3 are linearly independent and that L(S) = R3.

To show that α1, α2, α3 are linearly independent, we suppose that there exist c1, c2, c3 ∈ R such that

c1 α1 + c2 α2 + c3 α3 = θ

⇒ c1 (1, 0, 1) + c2 (0, 1, 1) + c3 (1, 1, 0) = (0, 0, 0)

⇒ c1 + c3 = 0, c2 + c3 = 0, c1 + c2 = 0

⇒ c1 = c2 = c3 = 0

Hence α1, α2, α3 are linearly independent.

To show that L(S) = R3, let ξ = (a, b, c) be any vector in R3. ξ will belong to L(S) if we can find r1, r2, r3 ∈ R such that

ξ = r1 α1 + r2 α2 + r3 α3.

This requires, (a, b, c) = r1 (1, 0, 1) + r2 (0, 1, 1) + r3 (1, 1, 0)

⇒ r1 + r3 = a, r2 + r3 = b, r1 + r2 = c  …..(1)

The system of equations (1) has a unique solution because the coefficient determinant is non-zero; here it equals –2.

Hence we can find r1, r2, r3 from (1) such that ξ = r1 α1 + r2 α2 + r3 α3.

Hence ξ ∈ L(S)

Since L(S) is the smallest subspace of R3 containing S, we have L(S) ⊆ R3. Combined with the above, this gives L(S) = R3.

Hence the given set forms a basis of R3.
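The whole of Example 03 can be verified in a few lines (a NumPy sketch; the test vector ξ is an arbitrary choice of ours):

```python
import numpy as np

# The set S from Example 03, one vector per row.
S = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=float)

# A nonzero determinant means the three vectors are linearly independent
# and span R^3, i.e. S is a basis.
det = np.linalg.det(S)
print(round(det))  # -2, which is nonzero

# Coordinates of an arbitrary vector xi in this basis: solving
# S.T @ r = xi gives r with r1*alpha1 + r2*alpha2 + r3*alpha3 = xi.
xi = np.array([4.0, 7.0, 1.0])   # arbitrary test vector
r = np.linalg.solve(S.T, xi)
assert np.allclose(r[0]*S[0] + r[1]*S[1] + r[2]*S[2], xi)
```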