

Lesson 1: Orthogonal complements

# Unique rowspace solution to Ax = b

Showing that, for any b that is in the column space of A, there is a unique member of the row space that is the "smallest" solution to Ax=b. Created by Sal Khan.

## Want to join the conversation?

• If Ax = b has a unique solution, does Ax = 0 have a unique solution?
• The fact that Ax = b has a unique solution implies that every column of the reduced row echelon form of A is a pivot column. If that were not the case, the solution would contain free variables, leading to infinitely many solutions for the system Ax = b. So because the reduced row echelon form of A has no free variables, the system Ax = b can have at most one solution, and any attempt to solve Ax = 0 will yield x = 0 as the only solution.
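The answer above can be checked numerically. A minimal NumPy sketch (the matrix and right-hand side here are made up for illustration): A has full column rank, so every column is a pivot column, Ax = b has a unique solution, and Ax = 0 has only the trivial solution.

```python
import numpy as np

# Hypothetical example: a 3x2 matrix with full column rank.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
b = A @ np.array([3.0, -1.0])     # choose b inside the column space of A

# Full column rank <=> every column is a pivot column <=> no free variables.
assert np.linalg.matrix_rank(A) == A.shape[1]

# Least squares recovers the unique solution of Ax = b.
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# The only solution of Ax = 0 is the zero vector.
x0, *_ = np.linalg.lstsq(A, np.zeros(3), rcond=None)
```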
• if r0 is already unique, how can even we argue that it is the smallest?
• r0 is unique amongst the vectors in the rowspace. There can be other vectors outside of the rowspace that also solve Ax = b, but any solution must be able to be constructed as the sum of a rowspace vector and a nullspace vector. If the nullspace vector is 0 then the solution in question is the unique rowspace solution, but if it is not 0 then it is a non-rowspace solution and must have length greater than the rowspace solution.
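The decomposition described above can be made concrete. A small NumPy sketch (the 1×2 matrix is a made-up example): the pseudoinverse returns the unique row-space solution r0, and adding any nonzero null-space vector gives another solution with strictly larger norm, since the two components are orthogonal.

```python
import numpy as np

# Hypothetical example: A = [1 1], b = [2]. The solution set of Ax = b is
# the line x1 + x2 = 2; the row space of A is spanned by (1, 1).
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

# The pseudoinverse picks out the row-space (minimum-norm) solution r0.
r0 = np.linalg.pinv(A) @ b        # -> (1, 1)

# Any other solution is r0 plus a nonzero null-space vector n; since
# r0 is orthogonal to n, ||r0 + n||^2 = ||r0||^2 + ||n||^2 > ||r0||^2.
n = np.array([1.0, -1.0])         # null-space direction: A @ n = 0
x = r0 + n

assert np.allclose(A @ x, b)      # x still solves Ax = b
assert np.linalg.norm(x) > np.linalg.norm(r0)
```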
• Is there a branch of calculus applicable to linear algebra so that one could use something akin to a second derivative test to find the minimum that Sal speaks of at the end of this video?
• I don't think you need any kind of derivatives for this :) In other words, it's a slightly different notion of minimum than the one from calculus. r0 is just a combination of the row vectors, so you only need basic algebra to find that combination. Watch the next video for a better explanation if you haven't done that already.
• This is a general question but could solve most of my confusions in this set of videos.

When we are talking about a matrix A, are we specifically talking about a square matrix?
• Not necessarily.
• Just to clear things up in my mind: in two previous videos (Representing vectors in rn using subspace members, Orthogonal complement of the orthogonal complement) we've seen that when you have a subspace V of RN and its orthogonal complement W, the dimension of V plus the dimension of W equals the dimension of the entire space, namely N. And to specify this further, in a previous video we've also seen that the two bases combined span all of RN.

To get to my point, doesn't this mean that when Khan draws the representations of V and W in RN at about , V is a circle within the circle of RN, and W doesn't need a circle because it covers the rest of RN?
• The bases of V and W might span all of Rn, but the union of V and W does not cover Rn.
I'll give a concrete example. Consider the case in R3. Let dim(V)=2, such that V is a plane in R3. Then dim(W)=1, a line in R3 that is orthogonal to the plane defined by V.
Clearly, the union of V and W does not cover all of R3. This plane+line is only a subset of R3 (not a subspace).
However, if we take the sum of a vector in V and a vector in W, we can get any arbitrary vector in R3.
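The distinction between the union V ∪ W and the sum V + W can be verified directly. A short NumPy sketch (the subspaces and target vector are made-up examples): V is the xy-plane, W is the z-axis, and a generic vector lies in neither subspace but is still a sum of one vector from each.

```python
import numpy as np

# Hypothetical R^3 example: V = the xy-plane, W = the z-axis (its
# orthogonal complement). Their union misses most of R^3, but V + W = R^3.
target = np.array([1.0, 2.0, 3.0])

# target is in neither subspace...
in_V = np.isclose(target[2], 0)        # V: vectors with z = 0
in_W = np.allclose(target[:2], 0)      # W: vectors with x = y = 0
assert not in_V and not in_W

# ...but it is the sum of a vector in V and a vector in W.
v = np.array([1.0, 2.0, 0.0])          # component in V
w = np.array([0.0, 0.0, 3.0])          # component in W
assert np.allclose(v + w, target)
```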
• At in the video, since x = r0 + n0, and r0 and n0 are known to be always orthogonal (at right angles) in R^n, doesn't the statement ||x||² = ||r0||² + ||n0||² basically prove the Pythagorean theorem for R^n?
• Sal shows a pictorial representation of N(A) and N(A)(complement) within R^n.

Is N(A) the null space, and are it and its complement inside the set A (represented by a matrix with a series of vectors), with A in R^n? If yes, is A called a subspace that contains the vectors, or is it a set of vectors?
• A with dimensions m×n is a matrix, not a subspace. Its null space and row space (the orthogonal complement of the null space) are subspaces of Rn, since A has m rows and n columns and a null-space vector has to be multiplied by A to get 0 — in other words, the number of entries in a null-space vector has to equal the number of columns of A. Let me know if that doesn't make sense.

The column space of A is a subspace of Rm, because each column has m entries, and by definition the columns span the column space.
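These dimension facts can be verified with shapes. A small NumPy sketch (the 2×3 matrix is a made-up example): null-space vectors have n entries, columns have m entries, and rank plus nullity equals n.

```python
import numpy as np

# Hypothetical 2x3 matrix: m = 2 rows, n = 3 columns. The column space
# lives in R^m; the row space and null space live in R^n.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
m, n = A.shape

# A null-space vector must have n entries so that A @ x makes sense.
x_null = np.array([1.0, -2.0, 1.0])
assert x_null.shape == (n,)
assert np.allclose(A @ x_null, 0.0)

# Each column of A (a spanning vector of the column space) has m entries.
assert A[:, 0].shape == (m,)

# Rank-nullity theorem: rank(A) + dim N(A) = n (nullity is 1 here).
assert np.linalg.matrix_rank(A) + 1 == n
```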
• Hey, so I am a little confused about something... When we talk of Rn, I tend to think of it as the collection of all the dimensions, but now I am questioning that, because in the matrix A (m by n), the n is only one number. So doesn't that mean that Rn is not the collection of all dimensions, but only a particular dimension (so that they match up nicely)?
• You are on the right track with your latter argument. Formally speaking, R^n is the collection of all n×1 column vectors (a.k.a. n-dimensional column vectors). As it turns out, not every "n-dimensional thing" is a column vector with n different numbers. There are other things out there that are "n-dimensional objects" but are not column vectors.
I think where your initial confusion came from, is that we don't let n range over all possible values. For each problem, we fix n to be a single positive integer.
• I am awkward in English.
"" (r0 - r1) is in C(At).
Why is r0 - r1 an element of C(At)?
• I am not sure that I have understood your question correctly, so I will try and re-state it as I understand it:
Why is (r0 - r1) an element of C(At)?

I won't go into depth, but refer you to Sal's videos where he does that already.
First, both r0 and r1 are elements of the row space of A, and a row space is just the column space of the transposed matrix, so they are elements of the column space C(At).
Second, the column space is a subspace (see "Column Space of a Matrix").
Third, subspaces are closed under multiplication by a constant and also under addition (see "Linear Subspaces"). Combining these two principles means that they are also closed under subtraction: take two elements, multiply one by -1, and then add them together.

Combine all of this together: r0 and r1 are in the same subspace, so r0 - r1 must be another element of that same subspace. In this case, the subspace is C(At), so the new element is also a member of C(At).
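The closure argument can be checked numerically. A short NumPy sketch (the matrix and coefficients are made up for illustration): build two row-space vectors, subtract them, and confirm the difference adds no new direction beyond the rows of A, i.e. it stays in C(At).

```python
import numpy as np

# Hypothetical 2x3 matrix whose rows span the row space C(A^T).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Two vectors in the row space: combinations of the rows of A.
r0 = 2 * A[0] + 3 * A[1]
r1 = -1 * A[0] + 5 * A[1]
d = r0 - r1

# Membership test: appending d to the rows of A must not raise the rank,
# i.e. d lies in the span of the rows.
rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.vstack([A, d]))
assert rank_aug == rank_A      # d is in C(A^T)
```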