
# Preimage and kernel example

Example involving the preimage of a set under a transformation. Definition of kernel of a transformation. Created by Sal Khan.

## Want to join the conversation?

• Is there a difference between kernel and null space? They seem to be defined in the same way (Av = 0). • Yes. The null space is a general notion for matrices, while the kernel is related to transformations of vectors. The kernel of a transformation is the same as the null space of its transformation matrix.
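That equivalence can be checked numerically. A minimal sketch in plain Python, assuming the matrix A = [[1, 3], [2, 6]] — chosen so that its kernel is spanned by (-3, 1), matching the solution discussed in these threads; it is not necessarily the exact matrix from the video:

```python
# Assumed matrix whose kernel is spanned by (-3, 1).
A = [[1, 3],
     [2, 6]]

def apply(A, v):
    """Multiply a 2x2 matrix A by a 2-vector v, i.e. compute T(v) = Av."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# Every multiple of (-3, 1) is sent to the zero vector, so it lies in
# ker(T), which is exactly null(A): the solution set of Av = 0.
for t in [-2, -1, 0, 1, 5]:
    v = [-3 * t, t]
    assert apply(A, v) == [0, 0]
```

The same `apply` call on any vector not in span{(-3, 1)} gives a nonzero result, which is what "losing your identity under T" means for kernel vectors.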
• Why doesn't multiplying by the inverse matrix work here? If you multiply both sides of the transformation equation where the product is the zero vector, you get x1, x2 = 0 and miss out on the -3, 1 solution... • It doesn't work because the inverse matrix does not exist. You can see this in two ways: the column vectors of A are multiples of one another, hence not linearly independent; and the determinant of the matrix is 0, so no inverse can exist. Also, an invertible matrix is one-to-one, so if you have two solutions mapped to zero, i.e. the kernel is not just the zero vector, then something is wrong.

Hope that helps.
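Both non-invertibility checks from the answer can be sketched directly. The matrix A = [[1, 3], [2, 6]] is an assumption consistent with the -3, 1 solution mentioned in the question:

```python
# Assumed singular matrix consistent with the thread's example.
A = [[1, 3],
     [2, 6]]

# Check 1: the determinant ad - bc of a 2x2 matrix is zero.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 1*6 - 3*2 = 0
assert det == 0  # no inverse can exist

# Check 2: the second column (3, 6) is 3 times the first column (1, 2),
# so the columns are linearly dependent.
assert [A[0][1], A[1][1]] == [3 * A[0][0], 3 * A[1][0]]
```

With det = 0, any attempt to form A^{-1} divides by zero, which is why the "multiply both sides by the inverse" shortcut silently fails here.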
• If you are one of those who have a hard time understanding what kernel or null space REALLY means, here is a more intuitive way to think about it.

We all know that the null space of a matrix is the set of solutions to the homogeneous equation Ax = 0. But I found it really helpful to see it this way:

Suppose T(x) = Ax; then the kernel of this transformation is the set of vectors that are SENT to the zero vector by T. In other words, it is the set of vectors that LOSE THEIR IDENTITY because of T.
• At Sal says that all linear transformations can be written as matrix multiplication problems, but my linear algebra professor says that this is only the case when you're going from R^n to R^m. My professor says that, technically, the derivative and the integral are linear transformations that can't be written as matrix multiplication. Who's right?
• Regarding the two blue and orange lines that map to two points: isn't it like looking, in a computer game, at a line that maps onto the screen as a single pixel because it is "perpendicular to the screen"?

Of course, that would be a mapping of R^3 -> R^2, but still, the idea is the same, right?
• I still don't get it... How do I visually think of images and preimages? • Say I have an object, and I transform it with T. This creates a new object, which we call the image.
Physically, T might be a carnival mirror which makes you look tall and skinny, and the object in question would be yourself. In the mirror you see a transformed version of yourself which is tall and skinny. This transformed version of yourself is the image of yourself under T.

A preimage finds all the possible objects which can create a given image (or set of images) under the transformation T.
Physically, consider T as casting a shadow, with shadows being the codomain. If you look at a shadow, you can then consider what objects might cast that shadow. The set of all the possible objects which can create a given shadow is the preimage of that shadow under T.
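The shadow analogy can be made concrete with a hypothetical projection onto the x-axis. This transformation is chosen for illustration only and does not come from the video:

```python
def T(v):
    """Project a 2-vector onto the x-axis -- its 'shadow'."""
    return (v[0], 0)

# Many different points cast the same shadow (3, 0):
points = [(3, -1), (3, 0), (3, 2.5), (3, 100)]
assert all(T(p) == (3, 0) for p in points)

# The preimage of {(3, 0)} under T is the entire vertical line x = 3 --
# the set of ALL objects that could have cast that shadow.
```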
• At , wouldn't the solution to [x1, x2] = [1, 0] + t[-3, 1] be the set of vectors that, in the graph, fill the space between the two parallel lines Sal drew? We are adding the vector t[-3, 1] to [1, 0], which graphically means the sum of these two vectors starts at the origin and ends on a point on the 'orange' line (i.e. t[-3, 1] shifted by 1 unit along the horizontal axis). What am I missing? • Yes, all those vectors go from the origin to the line y = 1/3 - x/3, as you say.

What I noticed, though, is that all the end points of those vectors "(1, 0) + t(-3, 1)" are actually on that line. He doesn't seem to realise this, but I think that the lines he's talking about are actually the loci of the endpoints of these vectors.

And in general, all the geometric entities that he describes with vectors (like the definition of a plane we studied in a previous video) are loci of endpoints of vectors, not spans of vectors.
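The claim above — that the endpoints of (1, 0) + t(-3, 1) all sit on the line y = 1/3 - x/3 — can be verified with exact rational arithmetic:

```python
from fractions import Fraction

# Endpoint of (1, 0) + t(-3, 1) is (1 - 3t, t); check it lies on the line
# y = 1/3 - x/3, equivalently x + 3y = 1, for several sample values of t.
for t in [Fraction(-2), Fraction(0), Fraction(1, 2), Fraction(5)]:
    x = 1 - 3 * t
    y = t
    assert y == Fraction(1, 3) - x / 3   # endpoint sits on the line
    assert x + 3 * y == 1                # same line, cleared of fractions
```

Algebraically this is immediate: substituting x = 1 - 3t into x + 3y gives (1 - 3t) + 3t = 1 for every t, so the locus of endpoints is exactly that line.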