Classical probability theory

In a classical probability space you have events and a probability measure:

$(\Omega, \Sigma, P).$

But you want numerical data, so you study random variables: functions

$f : \Omega \to \mathbb{C}$

such that

$f^{-1}(A) \in \Sigma$

for every open set $A = \{ z \in \mathbb{C} : a < \operatorname{Re}[z] < b \}$ and

$\sup_{\Omega} |f(\omega)| < \infty.$

They constitute the algebra $L^{\infty}(\Omega,\Sigma,P)$, and the expected value of any random variable $g$, defined by

$\mathbb{E}_P[g] = \int_{\Omega} g(\omega)\, dP(\omega),$

plays the role of a linear functional

$\mathbb{E}_P : L^{\infty}(\Omega,\Sigma,P) \to \mathbb{C}.$
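The expectation above can be made concrete on a finite sample space, where the integral reduces to a weighted sum. A minimal Python sketch, assuming a fair die as the probability space (the die, the variable names, and the helper `expectation` are illustrative choices of mine, not from the text):

```python
# Sketch of (Omega, Sigma, P) on a finite sample space, with random
# variables as functions Omega -> C and E_P as a linear functional.

# Sample space and probability measure: a fair six-sided die (assumed example)
omega = [1, 2, 3, 4, 5, 6]
P = {w: 1 / 6 for w in omega}

def expectation(f):
    """E_P[f] = sum over Omega of f(w) * P(w) -- the discrete integral."""
    return sum(f(w) * P[w] for w in omega)

# Two bounded (complex-valued) random variables
f = lambda w: w                          # the identity
g = lambda w: complex(0, 1) * (w % 2)    # i times the parity of w

# E_P is linear: E_P[a*f + b*g] = a*E_P[f] + b*E_P[g]
a, b = 2.0, 3.0
lhs = expectation(lambda w: a * f(w) + b * g(w))
rhs = a * expectation(f) + b * expectation(g)
assert abs(lhs - rhs) < 1e-12
```

On a finite space the sum makes the linearity of $\mathbb{E}_P$ manifest; boundedness ($\sup |f| < \infty$) is automatic because $\Omega$ is finite.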

In fact, $L^{\infty}(\Omega,\Sigma,P)$ is not only an algebra but a commutative von Neumann algebra, and $\mathbb{E}_P$ is what is called a faithful, normal state (see Quantum Probability Theory, by Hans Maassen, for details).
There is a theorem, the Gelfand-Naimark theorem, which, applied to a commutative von Neumann algebra with a normal, faithful state, lets us recover the original probability space. That is to say: all the data is inside the von Neumann algebra and the linear functional.

Sketch of the construction: if we begin with an algebra $\mathcal{A}$ (it could be $L^{\infty}(\Omega,\Sigma,P)$ or not), we recover a $\sigma$-algebra $\Sigma$ by selecting all the $p \in \mathcal{A}$ such that $p^2 = p = p^{*}$. Notice that if $p \in L^{\infty}$ is like that, then $p(\omega) = 0$ or $p(\omega) = 1$ for every $\omega$, so $p$ works like an indicator function of a set $A_p \in \Sigma$. The inclusion relation in $\Sigma$ is recovered by means of the relation $p \leq q$ iff $pq = p$. And so we can recover the elementary events and the sample space $\Omega$.
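The sketch above can be checked on a finite example: indicator functions are exactly the elements with $p^2 = p = p^{*}$, the relation $pq = p$ encodes inclusion of the underlying events, and evaluating the state on a projection returns the probability of the corresponding event. A hedged Python illustration (the fair-die space and all helper names are assumptions of mine):

```python
# Projections p (p^2 = p = p*) in L^infty of a finite space are exactly
# indicator functions; p <= q iff p*q = p mirrors A_p subset of A_q;
# and the state recovers the measure via E_P[p] = P(A_p).

omega = [1, 2, 3, 4, 5, 6]          # assumed example: a fair die
P = {w: 1 / 6 for w in omega}

def indicator(A):
    """The projection p associated with the event A in Sigma."""
    return {w: (1.0 if w in A else 0.0) for w in omega}

def product(p, q):
    """Pointwise product in the commutative algebra L^infty."""
    return {w: p[w] * q[w] for w in omega}

p = indicator({2, 4, 6})            # "even outcome"
q = indicator({2, 3, 4, 5, 6})      # "outcome at least 2"

# p is a projection: p^2 = p (and p* = p, since p is real-valued)
assert product(p, p) == p

# pq = p encodes the inclusion {2,4,6} subset of {2,3,4,5,6}
assert product(p, q) == p

# Evaluating the state on the projection recovers P(A_p) = 1/2
expectation = sum(p[w] * P[w] for w in omega)
assert abs(expectation - 0.5) < 1e-12
```

This is the sense in which the projections of the algebra, ordered by $pq = p$, reconstruct $\Sigma$, and the state reconstructs $P$.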
