- Orthogonal complements
- dim(v) + dim(orthogonal complement of v) = n
- Representing vectors in Rn using subspace members
- Orthogonal complement of the orthogonal complement
- Orthogonal complement of the nullspace
- Unique rowspace solution to Ax = b
- Rowspace solution to Ax = b example
Unique rowspace solution to Ax = b
Showing that, for any b that is in the column space of A, there is a unique member of the row space that is the "smallest" solution to Ax=b. Created by Sal Khan.
- If AX=b has a unique solution, does AX=0 have a unique solution?(4 votes)
- The fact that AX=b has a unique solution implies that every column of the reduced row echelon form of A is a pivot column. If that were not the case, the solution would contain free variables, leading to multiple solutions for the system Ax=b. So because the reduced row echelon form of A has no free variables, the system Ax=b can have at most one solution, and any attempt to solve Ax=0 will yield x=0 as the only solution.(7 votes)
- If r0 is already unique, how can we even argue that it is the smallest?(5 votes)
- r0 is unique amongst the vectors in the rowspace. There can be other vectors outside of the rowspace that also solve Ax = b, but any solution must be able to be constructed as the sum of a rowspace vector and a nullspace vector. If the nullspace vector is 0 then the solution in question is the unique rowspace solution, but if it is not 0 then it is a non-rowspace solution and must have length greater than the rowspace solution.(4 votes)
- Is there a branch of calculus applicable to linear algebra so that one could use something akin to a second derivative test to find the minimum that Sal speaks of at the end of this video?(5 votes)
- I don't think you need any kind of derivatives for this :) In other words, it's a slightly different minimum from the ones in calculus. r0 is just a combination of row vectors, so you only need basic algebra to find that combination. Look at the next video for a better explanation if you haven't done that already.(2 votes)
- This is a general question, but it could clear up most of my confusion with this set of videos.
When we are talking about a matrix A, are we specifically talking about a square matrix?(3 votes)
- Not necessarily.(2 votes)
- Just to clear things up in my mind: in two previous videos (Representing vectors in Rn using subspace members, Orthogonal complement of the orthogonal complement) we've seen that when you have a subspace V of Rn and its orthogonal complement W, the dimension of V plus the dimension of W is equal to the dimension of the entire space, meaning it's equal to n. And to take this further, in a previous video we've also seen that the two bases combined span all of Rn.
To get to my point: doesn't this mean that when Khan draws the representations of V and W in Rn at about 3:30, V is a circle within the circle of Rn, and W doesn't need a circle because it covers the rest of Rn?(1 vote)
- The bases of V and W might span all of Rn, but the union of V and W does not cover Rn.
I'll give a concrete example. Consider the case in R3. Let dim(V)=2, such that V is a plane in R3. Then dim(W)=1, a line in R3 that is orthogonal to the plane defined by V.
Clearly, the union of V and W does not cover all of R3. This plane plus line is only a subset of R3 (not a subspace).
However, if we take the sum of a vector in V and a vector in W, we can get any arbitrary vector in R3.(3 votes)
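A minimal numeric sketch of this answer (my own illustration in NumPy, not from the video): take V as the xy-plane in R3 and W as the z-axis. A typical vector lies in neither V nor W on its own, yet it always splits into a V-part plus a W-part.

```python
import numpy as np

# V = the xy-plane (dim 2) and W = the z-axis (dim 1) are orthogonal
# complements in R^3: dim(V) + dim(W) = 3.
def v_part(u):
    return np.array([u[0], u[1], 0.0])   # projection of u onto V

def w_part(u):
    return np.array([0.0, 0.0, u[2]])    # projection of u onto W

u = np.array([3.0, -1.0, 5.0])           # an arbitrary vector in R^3
decomposed = v_part(u) + w_part(u)       # u = (vector in V) + (vector in W)

# u itself is in neither V nor W, so the union V ∪ W misses most of R^3.
in_V = np.allclose(w_part(u), 0)
in_W = np.allclose(v_part(u), 0)
```

So the bases together span R3 even though the union of the two subspaces is only a thin slice of it.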
- At 16:22 in the video, since ||x||² = ||r0||² + ||n0||², and r0 and n0 are known to always be orthogonal (at right angles) in R^n, doesn't the statement ||x||² = ||r0||² + ||n0||² basically prove the Pythagorean theorem for R^n?(2 votes)
- Sal shows a pictorial representation of N(A) and N(A)'s orthogonal complement within R^n.
Is N(A) the null space, and is its complement inside the set A (represented by a matrix with a series of vectors), with A in R^n? If yes, is A called a subspace that contains the vectors, or is it a set of vectors?(2 votes)
- The column space of A (where A has dimensions mxn) is a subspace of Rm, while its null space and row space (the orthogonal complement of the null space) are in Rn. That's because there are m rows and n columns in A, and a null space vector has to be multiplied by A to get 0; in other words, the number of entries in a null space vector has to equal the number of columns of A. Let me know if that doesn't make sense.
The column space of A is in Rm because each column has m entries, and by definition the column space is the span of the columns of A.(1 vote)
- Hey, so I am a little confused about something... when we talk of Rn I tend to think of it as the collection of all the dimensions, but now I am questioning that, because of the matrix A (m by n): I assume the n in the matrix is only one number. So doesn't that mean that the matrix A is saying that Rn is not the collection of all dimensions, but only a particular dimension (so that they match up nicely)?(1 vote)
- You are on the right track with your latter argument. Formally speaking, R^n is the collection of all nx1 column vectors (aka n-dimensional column vectors). As it turns out, not every "n-dimensional thing" is a column vector with n different numbers. There are other things out there that are "n-dimensional objects" but are not column vectors.
I think your initial confusion came from the fact that we don't let n range over all possible values. For each problem, we fix n to be a single positive integer.(2 votes)
- I am awkward in English.
At 9:00, Sal says (r0 - r1) is in C(At).
Why is (r0 - r1) a member of C(At)?(2 votes)
- I am not sure that I have understood your question correctly, so I will try and re-state it as I understand it:
Why is (r0 - r1) an element of C(At)?
I won't go into depth, but refer you to Sal's videos where he does that already.
First, both r0 and r1 are members of the row space, and row spaces are just column spaces of the transposed matrix, so they are members of a column space, C(At).
Second, the column space is a subspace (see "Column Space of a Matrix" at 1:30).
Third, subspaces are closed under multiplication by a constant and also under addition (see "Linear Subspaces" at 3:00). Combining these two principles means that they are also closed under subtraction: take two elements, multiply one by -1, and then add them together.
Combine all of this: r0 and r1 are in the same subspace, so r0 - r1 must produce another element of that same subspace. In this case, the subspace is C(At), so the new element is also a member of C(At).(0 votes)
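A quick numeric check of the closure argument in this answer (my own sketch in NumPy; the matrix is hypothetical): the difference of two row-space members is itself a combination of the rows.

```python
import numpy as np

# Row space of A = C(A^T): all linear combinations of the rows of A.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

r0 = 2.0 * A[0] + 1.0 * A[1]     # a member of the row space
r1 = -1.0 * A[0] + 4.0 * A[1]    # another member of the row space

# Closure under scaling and addition implies closure under subtraction:
diff = r1 - r0                   # should equal (-3)*row1 + 3*row2
rebuilt = -3.0 * A[0] + 3.0 * A[1]
```

The difference of the two weight vectors gives the weights of `diff`, so `diff` is still a combination of the rows, i.e., still in C(At).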
- What's the point of having a solution to Ax=b? Sorry for this dumb question, but I am not really getting the point.(1 vote)
- Well, the whole point of math is to solve "different weird things" in order to solve even more :P And then it always turns out that these "different weird things" do have applications in real life. Solving Ax=b is to linear algebra almost what adding two numbers is to basic math. So your question can be rewritten as "why do we need linear algebra?". Without our knowledge of linear algebra there wouldn't be computer graphics, search engines (like Google's), and many other things.(2 votes)
Let's say I've got an m-by-n matrix A. That's my matrix right there. And I could just write it as a series of n column vectors, so it could be a1, a2, all the way to an. Now, let's say that I have some other vector b. Let's say b is a member of the column space of A. Remember, the column space is just the set of all of the vectors that can be represented as a linear combination of the columns of A, so that means that b can be represented as a linear combination of the columns of A. So I'll just write the constant factors as x1 times a1 plus x2 times a2, all the way to plus xn times an, where x1, x2, xn are all just arbitrary real numbers. Another way to state this is that A, which I could write as a1, a2, all the way to an, times some vector x1, x2, all the way to xn, is equal to b. These two statements are equivalent. We know that b is a member of the column space. That means that b can be represented as a linear combination of the columns of A, and that this statement right here can be rewritten this way. So you can write that the equation Ax equals b has at least one solution x that is a member of Rn. And the entries of x would represent the weights on the column vectors of A to get your linear combination b. This is all a bit of review. Now, let's draw Rn. Any solution to this equation right here is going to be a member of Rn. Remember, this is an m-by-n matrix. We have n columns, so any solution has to be a member of Rn right there, so let's draw Rn. So Rn maybe looks like that, so that is Rn. And let's look at some of the subspaces that we have in Rn. We have the null space. That's going to be in Rn. The null space is all of the solutions to the equation Ax is equal to 0. That's going to be in Rn. It's all of the x's that satisfy that equation, so let me draw that right here. So let's say I have the null space right there, so that is the null space of A. And then what else do we have in Rn?
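The equivalence in this paragraph -- b is in the column space of A exactly when Ax = b has a solution -- can be sketched numerically (my own example matrix, using NumPy):

```python
import numpy as np

# A is m-by-n; its columns a1, a2 live in R^m.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # m = 3, n = 2

x = np.array([3.0, -2.0])           # weights x1, x2 (a member of R^n)

# b = x1*a1 + x2*a2 is, by construction, in the column space of A ...
b = x[0] * A[:, 0] + x[1] * A[:, 1]

# ... and "b is a combination of the columns" is exactly the statement Ax = b.
same = np.allclose(A @ x, b)
```

The two sides are the same computation written two ways: matrix-vector multiplication is a linear combination of the columns with the entries of x as weights.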
Well, we have the orthogonal complement of the null space of A. Let me draw that. So we have the orthogonal complement-- let me do it in a different color. We have the orthogonal complement of the null space of A, which we can also call-- we learned this in the last video-- the row space of A, which is just the column space of A transpose. So we have two spaces here. That is the row space of A. So I have two subsets of Rn. I have the null space and then I have the null space's orthogonal complement, which is the row space of A. Now, we've seen in several videos now, and I proved it I think two videos ago, that any vector in Rn can be represented as a sum of a member of our null space, let's call that vector n, and some vector in our row space, let's call that vector r. Any vector in Rn can be represented as a sum of some vector in our null space and some vector in our row space. So any solution to this equation is a member of Rn, so it must be able to be represented by some member of our null space and some member of our row space. So let's write that down. So let's say x is a solution to Ax equals b, which also means that x is a member of Rn, so because it's a member of Rn, we can represent it as a combination of one vector here and one vector there. So let's say that x is equal to some vector r0 plus n0, where r0 is a member of our row space and n0 is a member of the row space's orthogonal complement. They are the orthogonal complements of each other, so n0 is a member of our null space. Fair enough. Now, one thing we might wonder is-- clearly this vector n0 isn't a solution to Ax equals b; a vector in the null space is a solution to Ax is equal to 0. But we might be curious as to whether this member of our row space, r0, is a solution to Ax is equal to b. This is kind of what we're focused on in this video. So let's solve for r0 right here.
So if we solve for r0, if we subtract n0 from both sides, we get r0 is equal to x minus n0. All I did was subtract n0 from both sides and switch things around. I solved for r0. Now, if we multiply, A times r0 is equal to A times this whole thing-- let me switch colors-- that's equal to A times x minus n0, which is equal to Ax minus An0. And what is this equal to? Well, A times x: we already said that x is a solution to Ax equals b, so this right here is going to be equal to b. And n0 is a member of our null space, which means it satisfies this equation right here, that A times any member of our null space is going to be equal to the zero vector. So that's going to be equal to the zero vector. So you have the vector b minus the zero vector, and you're just going to have the vector b. So we just found out that A times this member of our row space-- let's call that r0, that's that guy right there maybe. A times r0 is equal to b. So this is a solution. So r0 is a solution to Ax is equal to b. So far, it's kind of an interesting result that we have already. If you give me any vector b that is a member of our column space, then there is going to be some member of our row space right here that is a solution to Ax is equal to b. Now, the next question you might wonder is, is this the only guy in our row space that is a solution to Ax is equal to b? And to prove that, let's assume that there's another guy here. Let's say that r1 is a member of our row space and a solution to Ax is equal to b. Now, the row space is a valid subspace, so if I take the sum or the difference of any two vectors in the row space, I'll get another member of the row space. That's one of the requirements for being a valid subspace. So let's see this.
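The construction r0 = x - n0 can be sketched numerically (my own toy system, using NumPy): subtract the null-space component from any solution and you land on a row-space vector that still solves Ax = b.

```python
import numpy as np

# A hypothetical 1x2 system with a whole line of solutions: x1 + x2 = 2.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x = np.array([3.0, -1.0])            # one particular solution: Ax = b

# The null space of A is spanned by (1, -1); project x onto it to get n0.
n = np.array([1.0, -1.0]) / np.sqrt(2.0)   # unit vector along N(A)
n0 = (x @ n) * n                     # component of x in the null space
r0 = x - n0                          # the row-space component of x

Ar0 = A @ r0                         # A*r0 = Ax - A*n0 = b - 0 = b
```

Here r0 works out to (1, 1), a multiple of the single row of A, so it really does live in the row space, and it still maps to b.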
So if I take two members of our subspace, so if I take r1 minus r0-- their difference, which is really just a sum where you multiply one of them by negative 1-- that has to be a member of the subspace. So this must also be a member of our row space. That's because our row space is a valid subspace. You take two of its members, you take their difference, that also has to be a member. Fair enough. Now, let's see what happens when you multiply this guy by A. So if I take A times r1 minus r0, what do I get? I get A times r1 minus A times r0. We assumed that r1 is a solution to Ax is equal to b, and r0, we already found out, is a solution to Ax equal to b. So either of these, when you multiply them by A, equals b. So this equals b and that equals b, so you get b minus b, which is the zero vector. Now, this is interesting. This tells us that r1 minus r0 is a solution to the equation Ax is equal to 0, right? When I put r1 minus r0 in the place of x right there and I multiplied it by A, I got 0, which implies that r1 minus r0, this vector, is a member of our null space. So I have a vector here that's a member of my row space-- and we got that from the fact that both of these are members of our row space and the row space is closed under addition and subtraction-- and the vector r1 minus r0 is also a member of my null space. And we've seen this several times already. If I have a vector that is in a subspace and it's also in the orthogonal complement of the subspace-- the null space is the orthogonal complement of the row space-- then the only possible vector that it can be is the zero vector. That's the only vector that's inside both a subspace and its orthogonal complement. These two guys are the orthogonal complements of each other. We drew it up here.
So we get that r1 minus r0 must be equal to the zero vector. That's the only vector that's in a subspace and its orthogonal complement, which implies that r1 must be equal to r0. When we take the difference, we get the zero vector. So we have a couple of neat results here. What do we know so far? We know that if we have some vector b that is a member of the column space of A, then there exists a unique member of the row space of A-- right? We just proved the uniqueness. Let me write it. Let me do it in a different color. I want to make this really stand out in your brain. There exists a unique member of the row space of A, let me call that r0, such that r0 is a solution to Ax is equal to b. It's a little bit of a complex statement here, but it's interesting. You give me any b that's a member of the column space of A, then there will exist a unique member of the row space of A that is a solution to Ax is equal to b. Now, we can go further with this. We wrote up here that any solution to this equation Ax is equal to b can be written as a sum r0 plus n0, where r0 is a member of our row space and n0 is a member of our null space, and that's because any member of Rn can be represented as a sum of a member of a subspace and a member of the subspace's orthogonal complement. Let me rewrite that down here. So we already said that any solution x to Ax is equal to b can be written-- let me write it this way-- as a combination r0 plus n0. Fair enough. Now, what happens if I want to take the square of the length of x on both sides of that? Let me write this down, and you'll see why I'm writing this, because I have another interesting result to show you.
So if I were to take the square of the length of any solution to this equation right here, well, that's going to be the same thing as x dot x, which is the same thing as this thing dotted with itself, r0 plus n0 dot r0 plus n0. And what is this equal to? This is equal to r0 dot r0 plus r0 dot n0 plus n0 dot r0 plus n0 dot n0. I just kind of foiled it out, and we can do that because we know the dot product exhibits the distributive property. So this first term right here is equal to the length of r0 squared. Now, what is r0 dot n0? We don't even have to simplify this much more. n0 is a member of our null space. r0 is a member of our row space. Each of them is in a subspace that is the orthogonal complement of the other, which means that anything here dotted with anything in there is equal to 0. So r0 dot n0 is going to be equal to 0. These guys are orthogonal to each other, so that's going to be equal to 0, that's going to be equal to 0, and then you get plus-- what's this? n0 dot n0 is just the length of the vector n0 squared. These are all vectors. And so we get that the length of the vector x squared is equal to the length of our unique member of our row space squared plus the length of that member of our null space squared. Now, this last term is at minimum 0, so we can say that this quantity right here is definitely greater than or equal to just the length of r0 squared. Or another way to think about it is, you give me any solution to the equation Ax is equal to b, and the square of its length is going to be greater than or equal to the square of r0's length. Or since both of the lengths are always positive, you can take the positive square root and you know you won't have to switch signs there: the length of any solution to Ax equals b is going to be greater than or equal to the length of r0. So that makes r0 kind of a special solution.
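The Pythagorean split derived here can be checked numerically on the same kind of toy system (my own example in NumPy; the matrix and vectors are hypothetical):

```python
import numpy as np

# 1x2 system: x1 + x2 = 2, so solutions form a line in R^2.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x = np.array([3.0, -1.0])            # some solution to Ax = b
r0 = np.array([1.0, 1.0])            # the row-space solution
n0 = x - r0                          # the null-space component of x

# r0 ⟂ n0, so ||x||^2 = ||r0||^2 + ||n0||^2 >= ||r0||^2.
lhs = np.dot(x, x)
rhs = np.dot(r0, r0) + np.dot(n0, n0)
```

Since the cross terms r0 · n0 vanish, the square of any solution's length exceeds (or equals) the square of r0's length, exactly as derived above.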
So now let's write our entire statement, everything that we've learned in this video. If b is a member of the column space of A, then there exists a unique r0 that is a member of the row space of A, such that r0 is a solution to Ax is equal to b. And not only is it a solution, it's a special solution. r0 is the solution with the least length; no solution has a smaller length than r0. Maybe some other solution could have the same length, but no other solution can have a smaller length. So we could write it this way: if you give me any vector b that is a member of the column space of A, then there exists a unique member of the row space that is essentially the smallest solution-- you can read "smallest" as having the least length-- to Ax is equal to b, which is a pretty neat outcome. In the next video, we'll explore this a little bit more visually.
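As a closing aside (my addition, not from the video): NumPy's pseudoinverse computes exactly this minimum-length solution, which is the unique row-space solution the video calls r0.

```python
import numpy as np

# A rank-1 matrix; b = (2, 4) is in the column space of A.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])

# pinv(A) @ b is the minimum-norm solution of Ax = b -- the unique solution
# lying in the row space of A (spanned here by (1, 1)).
r0 = np.linalg.pinv(A) @ b

# Any other solution x = r0 + n0, with n0 a nonzero null-space vector,
# is strictly longer, by the Pythagorean argument from the video.
x_other = r0 + np.array([1.0, -1.0])     # (1, -1) spans the null space
longer = np.linalg.norm(x_other) > np.linalg.norm(r0)
```

This is also what `np.linalg.lstsq` returns for consistent underdetermined systems: of all the solutions on the line x1 + x2 = 2, it picks the one closest to the origin.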