Projections

Given a vector space $V$, a **projection** is a linear map

$$P : V \to V$$

such that $P^2 = P$.
![[Pasted image 20211017184359.png]]
Roughly speaking, to specify a projection you need not only the "target" subspace, but also a "direction" along which to project (the red vectors in the picture above). So, in some sense, a projection gives us a decomposition of the vector space $V$ into a "horizontal" and a "vertical" subspace.

In finite-dimensional vector spaces $V$, the subspaces $U = \operatorname{Im}(P)$ and $W = \operatorname{Ker}(P)$ satisfy $V = U \oplus W$, since every $x \in V$ decomposes as

$$x = Px + (x - Px)$$

with $Px \in \operatorname{Im}(P)$ and $x - Px \in \operatorname{Ker}(P)$.
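This decomposition can be checked numerically. A minimal sketch in NumPy (my own illustrative example, not from the notes): the non-orthogonal projection of $\mathbb{R}^2$ onto the $x$-axis along the direction $(1,1)$.

```python
import numpy as np

# P maps (x, y) to (x - y, 0): projection onto the x-axis along (1, 1).
P = np.array([[1.0, -1.0],
              [0.0,  0.0]])

assert np.allclose(P @ P, P)       # idempotent: P^2 = P

x = np.array([3.0, 2.0])
u = P @ x                          # component in U = Im(P)
w = x - P @ x                      # component in W = Ker(P)
assert np.allclose(u + w, x)       # x = Px + (x - Px)
assert np.allclose(P @ w, 0)       # x - Px lies in Ker(P)
```

Note that here $U$ and $W$ are not orthogonal; idempotence alone is enough for the decomposition.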

Other properties of projections, valid even in infinite dimensions, are given below.

When we work in a Hilbert space $H$ instead of a plain vector space $V$, we say that $P$ is an **orthogonal projection** if

$$\langle Px, y \rangle = \langle x, Py \rangle$$

that is, if it is a self-adjoint operator.
Orthogonal projections satisfy that $U = \operatorname{Im}(P)$ and $W = \operatorname{Ker}(P)$ are orthogonal: for $u \in U$ and $w \in W$,

$$\langle u, w \rangle = \langle Pu, w \rangle = \langle u, Pw \rangle = \langle u, 0 \rangle = 0$$

Conversely, any projection $P$ for which $U$ and $W$ are orthogonal satisfies, writing $x = u_1 + w_1$ and $y = u_2 + w_2$ (with $u_i \in U$, $w_i \in W$),

$$\langle Px, y \rangle = \langle P(u_1 + w_1), u_2 + w_2 \rangle = \langle u_1, u_2 + w_2 \rangle = \langle u_1, u_2 \rangle$$

and

$$\langle x, Py \rangle = \langle u_1 + w_1, P(u_2 + w_2) \rangle = \langle u_1 + w_1, u_2 \rangle = \langle u_1, u_2 \rangle$$

and therefore P is self-adjoint.
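Self-adjointness is easy to verify numerically. A small sketch (my own example, assuming real scalars, so "self-adjoint" means the matrix is symmetric), using the orthogonal projection onto the line spanned by $u = (1,1)/\sqrt{2}$:

```python
import numpy as np

# Orthogonal projection onto span{u} in R^2, u a unit vector.
u = np.array([1.0, 1.0]) / np.sqrt(2)
P = np.outer(u, u)

assert np.allclose(P, P.T)                     # self-adjoint (symmetric)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
assert np.isclose((P @ x) @ y, x @ (P @ y))    # <Px, y> = <x, Py>
```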

The operator $I - P$ is also a projection, with

$$\operatorname{Im}(I - P) = \operatorname{Ker}(P)$$

Proof:
First,

$$(I - P)^2 = I - 2P + P^2 = I - P$$

And secondly, for every $x \in H$, $(I - P)(x) \in \operatorname{Ker}(P)$, since

$$P(I - P)(x) = Px - P^2 x = Px - Px = 0$$

Conversely, if $x \in \operatorname{Ker}(P)$ then $(I - P)(x) = x$, so $x \in \operatorname{Im}(I - P)$.
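The complementary projection can also be checked numerically. A sketch (my own example), with $P$ the orthogonal projection onto the line spanned by a unit vector in $\mathbb{R}^3$:

```python
import numpy as np

# P projects onto span{u}; Q = I - P should again be a projection,
# with Im(Q) contained in Ker(P).
u = np.array([1.0, 2.0, 2.0]) / 3.0     # unit vector: (1+4+4)/9 = 1
P = np.outer(u, u)
Q = np.eye(3) - P

assert np.allclose(Q @ Q, Q)             # (I - P)^2 = I - P
x = np.array([1.0, -1.0, 4.0])
assert np.allclose(P @ (Q @ x), 0)       # P(I - P)x = 0
```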

Moreover, an orthogonal projection satisfies, using $P^2 = P$, self-adjointness, and the Cauchy-Schwarz inequality,

$$\|Pv\|^2 = \langle Pv, Pv \rangle = \langle P^2 v, v \rangle = \langle Pv, v \rangle \le \|Pv\| \, \|v\|$$

and so

$$\|Pv\| \le \|v\|$$
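A quick numerical check of this bound (my own sketch, using a rank-1 orthogonal projection in $\mathbb{R}^2$):

```python
import numpy as np

# ||Pv|| <= ||v|| for an orthogonal projection P, tested on random vectors.
u = np.array([3.0, 4.0]) / 5.0           # unit vector
P = np.outer(u, u)

rng = np.random.default_rng(1)
for _ in range(100):
    v = rng.standard_normal(2)
    assert np.linalg.norm(P @ v) <= np.linalg.norm(v) + 1e-12
```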

This says, in particular, that $P$ is a bounded, and hence continuous, operator.

Matrix Representation of Orthogonal Projections

Let’s start with the simplest case, where the subspace $\operatorname{Im}(P) = U \subseteq H$ has dimension 1. Let $u \in H$ be a unit vector, i.e., $\|u\| = 1$, and suppose we want to project vectors onto the line spanned by $u$.
Then the orthogonal projection of any vector $x \in H$ onto the subspace $U = \operatorname{span}\{u\}$ is:

$$P(x) = \langle u, x \rangle u$$

This can be written compactly as:

$$P = u u^{\dagger}$$

or, in Dirac notation:

$$P = |u\rangle\langle u|$$

Here, if $u = (a, b, c)^T$, the outer product is

$$u u^{\dagger} = \begin{pmatrix} a \\ b \\ c \end{pmatrix} \begin{pmatrix} \overline{a} & \overline{b} & \overline{c} \end{pmatrix} = \begin{pmatrix} a\overline{a} & a\overline{b} & a\overline{c} \\ b\overline{a} & b\overline{b} & b\overline{c} \\ c\overline{a} & c\overline{b} & c\overline{c} \end{pmatrix}$$
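The rank-1 formula can be sketched numerically (my own example, with a complex unit vector, taking the inner product conjugate-linear in the first slot, as in Dirac notation):

```python
import numpy as np

# Rank-1 orthogonal projection P = u u^dagger onto span{u}.
u = np.array([1.0, 1j]) / np.sqrt(2)
P = np.outer(u, u.conj())

assert np.allclose(P @ P, P)                   # idempotent
assert np.allclose(P, P.conj().T)              # self-adjoint
x = np.array([2.0, 3.0 + 1j])
assert np.allclose(P @ x, np.vdot(u, x) * u)   # P x = <u, x> u
```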

Now, suppose we have a $k$-dimensional subspace $U \subseteq H$, and we choose an orthonormal basis $\{u_1, u_2, \ldots, u_k\}$ of $U$.
Let’s form a matrix $A$ whose columns are the basis vectors:

$$A = \begin{pmatrix} | & & | \\ u_1 & \cdots & u_k \\ | & & | \end{pmatrix} \qquad (\text{an } n \times k \text{ matrix})$$

Note: Each $u_i \in H \cong \mathbb{C}^n$ (or $\mathbb{R}^n$).

Then the orthogonal projection operator $P$ onto $U$ is given by:

$$P = A A^{\dagger}$$
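A sketch of this construction (my own example in $\mathbb{R}^4$, where an orthonormal basis of $U$ is obtained from a QR factorization of two arbitrary independent vectors):

```python
import numpy as np

# Columns of M span a 2-dimensional subspace U of R^4.
M = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [1.0, 1.0]])
A, _ = np.linalg.qr(M)        # A is 4x2 with orthonormal columns
P = A @ A.T                   # P = A A^dagger (transpose, since real)

assert np.allclose(P @ P, P)  # idempotent
assert np.allclose(P, P.T)    # self-adjoint
# P fixes vectors that already lie in U:
assert np.allclose(P @ M[:, 0], M[:, 0])
```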

The intuition is: