

# Linear combinations and span

Understanding linear combinations and spans of vectors. Created by Sal Khan.

## Want to join the conversation?

• Around when Sal gives a generalized mathematical definition of "span", he defines "i" as having to be greater than one and less than "n". Is this because "i" is indicating the instances of the variable "c", or is there something in the definition I'm missing?
• "i" is just a variable that's used to denote a number of subscripts, so yes, it's just a number of instances. If you don't know what a subscript is, think about this: if you wanted two different values both called x, you couldn't just make x = 10 and x = 5, because you'd get confused over which was which. So you call one of them x1 and the other x2, which could equal 10 and 5 respectively.
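For reference, the definition being discussed can be written out as follows (my notation, not necessarily the exact symbols used in the video); the index i simply labels which coefficient goes with which vector:

```latex
\operatorname{span}(v_1, \dots, v_n)
  = \left\{\, c_1 v_1 + c_2 v_2 + \cdots + c_n v_n \;:\; c_i \in \mathbb{R},\ 1 \le i \le n \,\right\}
```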
• Does Sal mean that to represent the whole of R2, the two vectors need to be linearly independent, and that linearly dependent vectors can't fill in the whole R2 plane?
• Yes. And, in general, if you have n linearly independent vectors in Rn, then you can represent all of Rn by the set of their linear combinations. If you have n vectors but one of them is a linear combination of the others, then you really have n - 1 linearly independent vectors, and their span is only an (n - 1)-dimensional subspace. So in the case of two vectors in R2, if they are linearly dependent, that means they lie on the same line through the origin and could not possibly fill out the whole plane.

Feel free to ask more questions if this was unclear. Cheers
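Since the answer above is about two vectors in R2, here is a small sketch of the standard determinant test for whether two vectors span the plane (the example vectors are my own, not from the video):

```python
# Two vectors in R^2 are linearly dependent exactly when the 2x2
# determinant of the matrix with those vectors as columns is zero,
# i.e. when they lie on the same line through the origin.

def determinant_2d(a, b):
    """Determinant of the 2x2 matrix with columns a and b."""
    return a[0] * b[1] - a[1] * b[0]

def spans_r2(a, b):
    """True if the linear combinations of a and b fill all of R^2."""
    return determinant_2d(a, b) != 0

print(spans_r2((1, 0), (0, 1)))   # independent, so True
print(spans_r2((1, 2), (2, 4)))   # (2, 4) = 2*(1, 2): same line, so False
```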
• What would the span of the zero vector be? Would it be the zero vector as well?

My text also says that there is only one situation where the span would not be infinite. I thought this might be the span of the zero vector, but on doing some problems, I have several which have a span of the empty set. This happens when the matrix row-reduces to the identity matrix. So in which situation would the span not be infinite?
• I understand the concept theoretically, but where can I find numerical questions/examples...
• I'm really confused about why the top equation was multiplied by -2. Surely it's not an arbitrary number, right?
• Sal was setting up the elimination step. The next thing he does is add the two equations, and the C_1 variable is eliminated, allowing us to solve for C_2. Multiplying by -2 was the easiest way to get the C_1 terms to cancel.

Another question is why he chose to use elimination. The first equation is already solved for C_1, so it would be very easy to use substitution. He may have chosen elimination because that is how we work with matrices.
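The elimination step described above can be sketched for a generic 2x2 system; the coefficients and target vector below are made up for illustration and are not the numbers from the video:

```python
# Solve  a1*c1 + b1*c2 = x1  and  a2*c1 + b2*c2 = x2  for (c1, c2)
# by elimination, assuming the system has a unique solution.

def solve_2x2(a1, b1, x1, a2, b2, x2):
    # Multiply the first equation by -(a2/a1) and add it to the second,
    # which cancels the c1 term (this mirrors the "multiply by -2" step).
    factor = -a2 / a1
    b2_new = b2 + factor * b1
    x2_new = x2 + factor * x1
    c2 = x2_new / b2_new          # second equation now has only c2
    c1 = (x1 - b1 * c2) / a1      # back-substitute into the first
    return c1, c2

# Example: write (7, 0) as c1*(1, 2) + c2*(3, -1).
# Row 1: 1*c1 + 3*c2 = 7;  Row 2: 2*c1 - 1*c2 = 0.
c1, c2 = solve_2x2(1, 3, 7, 2, -1, 0)
print(c1, c2)   # 1.0 2.0
```

Here a1 = 1 and a2 = 2, so the factor is exactly the -2 the question asks about.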
• When he is describing the i and j vectors, he writes them as [1, 0] and [0, 1] respectively, yet on drawing them he draws them to a scale of [2, 0] and [0, 2]. Is this an honest mistake, or is it just a property of unit vectors having no fixed dimension?
• Shouldn't it be 1/3 (x2 - 2 (!!) x1), around 18 minutes in? Pretty sure.
• Sal "adds" the equations for x1 and x2 together. I don't understand how this is even a valid thing to do. The first equation finds the value for x1, and the second equation finds the value for x2. I get that you can multiply both sides of an equation by the same value to create an equivalent equation, and that you might do so for purposes of elimination, but how can you just "add" the two distinct equations for x1 and x2 together? What does that even mean?
• You know that both sides of an equation have the same value. Let's call that value A.

You can add A to both sides of another equation.

But A has been expressed in two different ways; the left side and the right side of the first equation. Let's call those two expressions A1 and A2.

Remember that A1=A2=A. Since you can add A to both sides of another equation, you can also add A1 to one side and A2 to the other side - because A1=A2.

------------------------------------

Another way to explain it - consider two equations:
L1 = R1
L2 = R2

Add L1 to both sides of the second equation:
L2 + L1 = R2 + L1

Since L1=R1, we can substitute R1 for L1 on the right hand side:
L2 + L1 = R2 + R1

And that's pretty much it.

------------------------------------

If that's too hard to follow, just take it on faith that it works and move on.
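The argument above can also be sanity-checked numerically; the system and its solution below are made up for illustration:

```python
# If (c1, c2) satisfies both  L1 = R1  and  L2 = R2, then it also
# satisfies the sum  L2 + L1 = R2 + R1, because we are adding equal
# quantities to both sides of an equation.
# Example system:  c1 + 3*c2 = 7   and   2*c1 - c2 = 0.
c1, c2 = 1, 2   # a known solution of this system

L1, R1 = c1 + 3 * c2, 7
L2, R2 = 2 * c1 - c2, 0

assert L1 == R1 and L2 == R2   # (c1, c2) solves both equations...
assert L2 + L1 == R2 + R1      # ...and therefore also their sum
```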
• In the video, Sal says we are in R^n, but then the correction says we are in R^m. Why does it have to be R^m? Is it because the number of vectors doesn't have to be the same as the size of the space?
• It's true that you can decide to start a vector at any point in space. But the "standard position" of a vector implies that its starting point is the origin. If nothing is telling you otherwise, it's safe to assume that a vector is in its standard position; and for the purposes of spaces and `span`, all vectors are considered to be in standard position.
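One way to see the m-versus-n distinction concretely: the number of vectors (n) need not match the dimension of the space they live in (m). Below is a small sketch using NumPy with two vectors (n = 2) in R^3 (m = 3); the example vectors and the `in_span` helper are my own, not from the video:

```python
import numpy as np

def in_span(vectors, b):
    """True if b is a linear combination of the given vectors.

    b is in the span exactly when appending it as an extra column
    does not raise the rank of the matrix of column vectors.
    """
    A = np.column_stack(vectors)
    augmented = np.column_stack(vectors + [b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])

print(in_span([v1, v2], v1 + 3 * v2))                 # True: a combination
print(in_span([v1, v2], np.array([1.0, 0.0, 0.0])))   # False: off the plane
```

Two independent vectors in R^3 span only a plane through the origin, so most vectors of R^3 fall outside their span.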