Hilbert space: treating functions as vectors






The tools of linear algebra are extremely useful when working in (e.g.) Euclidean spaces. Wouldn't it be great if we could apply these tools to additional mathematical structures, like functions and sequences? A Hilbert space allows us to do exactly that – apply linear algebra to functions.

Intuition – functions as infinite-dimensional vectors

There are many ways to view vectors; a standard interpretation is an ordered list of numbers. Let's take a three-element vector as an example, whose first elements are 1.4 and 4.2.

It is a list of three numbers, where each number has an index: 1.4 at index 1, 4.2 at index 2, and so on. Another way to think about a vector is as a function, in the strict mathematical sense: a vector is a function whose domain is the set of indices $\{1, 2, 3\}$ and whose codomain is $\mathbb{R}$. Or:

$$v : \{1, 2, 3\} \to \mathbb{R}$$

Now imagine that our vector is $n$-dimensional: $v = (v_1, v_2, \ldots, v_n)$. Using function notation we can write $v : \{1, 2, \ldots, n\} \to \mathbb{R}$. This works for any $n$, and in fact it also works when the index set is the infinite set $\mathbb{N}$. Our vector then becomes a function from the natural numbers to the real numbers:

$$v : \mathbb{N} \to \mathbb{R}$$

But we can take it even further: what if we allow any real number as an index? Our vector is then $v : \mathbb{R} \to \mathbb{R}$, or, to make it look more familiar, we can rename it: $f : \mathbb{R} \to \mathbb{R}$. This "vector" is simply a function from the reals to the reals.

Although we cannot write out all the elements explicitly (there are infinitely many of them, and most indices are irrational numbers with no finite representation), we can instead state a rule that maps an index to an element. For example, a formula such as $f(x) = x^2$ is such a rule: for a given index $x$, it specifies the element's value. We're not used to thinking of functions as vectors, but if we carefully extend some definitions, it's entirely possible!

Therefore, functions can be viewed as vectors with infinitely many dimensions. The next step is to see how we can define a vector space of functions.

Functions form a vector space

Functions, together with standard addition and scalar multiplication operations, form a vector space.

For full generality, let $X$ be any set and let $\mathbb{F}$ be a field of scalars (either $\mathbb{R}$ or $\mathbb{C}$). Let $V$ be the set of all functions mapping $X$ to $\mathbb{F}$. For functions $f, g \in V$ and a scalar $\alpha \in \mathbb{F}$, we define function addition and scalar multiplication pointwise:

$$(f + g)(x) = f(x) + g(x)$$

$$(\alpha f)(x) = \alpha f(x)$$

Then $V$, together with these operations, forms a vector space. For the proof, see Appendix A.
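To make the pointwise definitions concrete, here is a minimal sketch in Python; the helper names `add` and `scale` are my own, not from the post:

```python
# Sketch: treating functions as "vectors" with pointwise operations.

def add(f, g):
    """Vector addition of functions: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(alpha, f):
    """Scalar multiplication: (alpha * f)(x) = alpha * f(x)."""
    return lambda x: alpha * f(x)

f = lambda x: x ** 2
g = lambda x: 3 * x

# Linear combination of two function "vectors": h(x) = x^2 + 6x
h = add(f, scale(2.0, g))
print(h(1.0))  # 1 + 6 = 7.0
```

The result of adding or scaling functions is again a function, which is exactly the closure property a vector space requires.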

Square integrable functions

A vector space is useful, but to get to Hilbert spaces and perform more interesting operations on functions, we need some additional structure.

From here on, we'll switch to complex-valued functions (real-valued functions are just a special case). A function $f$ is called square integrable if:

$$\int_{-\infty}^{\infty} |f(x)|^2 \, dx < \infty$$

The set of such functions is usually denoted $L^2$, and it forms a subspace of the vector space we discussed in the previous section (for a proof, see Appendix B).

The integral of the square of a function is the counterpart of the Euclidean norm for vectors; intuitively, it serves as a measure of "length", the term used for vectors. For functions, it is usually called energy.

Inner products and norms

To add more tools from the linear algebra toolbox, let’s define an inner product:

Why is it defined this way? Here is the definition of the inner product between two $n$-dimensional vectors with complex values:

$$\langle u, v \rangle = \sum_{i=1}^{n} u_i \overline{v_i}$$

Sound familiar? The function version is a generalization of this sum over an infinite range (the entire x-axis, if you prefer) using an integral.

As a next step, we want to show that this is an inner product space when equipped with the inner product defined above. First and foremost, we need to show that the inner product is finite for every pair of functions (if the integral does not converge, it's not something we can work with). This can be done using the integral form of the Cauchy–Schwarz inequality:

$$\left| \int_{-\infty}^{\infty} f(x) \overline{g(x)} \, dx \right|^2 \le \int_{-\infty}^{\infty} |f(x)|^2 \, dx \cdot \int_{-\infty}^{\infty} |g(x)|^2 \, dx$$

Since $f$ and $g$ are square integrable, the right-hand side is finite, and hence the inner product is also finite. This is where square integrability comes into play – without it, we could not define the inner product for every pair of functions.
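As an illustration, here's a small numerical sketch (the function choices and the `inner` helper are my own, not from the post) that approximates the $L^2$ inner product with a Riemann sum and checks the Cauchy–Schwarz inequality for two concrete square integrable functions:

```python
import numpy as np

# Approximate the L2 inner product on a finite grid. Gaussian-decaying
# functions are used so truncating the integral to [-10, 10] loses almost
# nothing.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

def inner(f, g):
    """<f, g> = integral of f(x) * conj(g(x)) dx, via a Riemann sum."""
    return np.sum(f(x) * np.conj(g(x))) * dx

f = lambda t: np.exp(-t ** 2)            # square integrable
g = lambda t: t * np.exp(-t ** 2 / 2)    # square integrable

lhs = abs(inner(f, g)) ** 2
rhs = inner(f, f).real * inner(g, g).real
print(lhs <= rhs + 1e-12)  # Cauchy-Schwarz holds numerically
```

Both sides are finite precisely because the functions decay fast enough to be square integrable; a function like $f(x) = 1$ would make the right-hand side blow up.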

Other properties of inner products can also be demonstrated easily, and there are plenty of resources online that show how.

Therefore, square integrable functions, together with the inner product operation shown here, form an inner product space.

This inner product can be used to define a norm for our space:

$$\|f\| = \sqrt{\langle f, f \rangle}$$

Once again, because our functions are square integrable, the norm exists, and it is easy to show that it satisfies all the usual requirements of a norm.

Are we Hilbert yet?

We have seen that the set of square integrable functions forms a proper vector space, and together with an inner product operation it also forms an inner product space; it has a norm as well. So does it have everything we need for linear algebra?

Almost. The space should also be complete. The word "complete" is highly overloaded in mathematics, so it's important to say what it means in this context. Simply put, it means that there are no "holes" in the set – no sequence of elements in the set converges to an element outside the set. To put it in less simple terms, a space is complete if every Cauchy sequence of its elements converges to an element of the space.

This takes us deep into the larger and more advanced topic of real analysis. The Riesz–Fischer theorem shows that $L^2$ is complete.

Once we add completeness to the set of properties, we get a Hilbert space.

[Image: photo of David Hilbert saying "Wow"]

You may also hear the term Banach space mentioned in this context. Banach spaces are more general than Hilbert spaces: a complete space with a norm is a Banach space (the norm need not come from an inner product). A complete inner-product space is a Hilbert space – the norm of a Hilbert space is defined using its inner product, as we have seen above.

Application: Generalized Fourier Series

Fourier series is one of the most brilliant and consequential ideas in mathematics. I’d really like to delve deeper into this topic, but that would require a post (or a book) of its own.

In short, Fourier series can be defined for functions because they form a Hilbert space. In particular, the inner product for functions lets us define orthogonality and the concept of basis vectors in $L^2$. These are then used to express any function as a weighted sum of a series of basis functions spanning the space. Furthermore, the completeness of the space guarantees that Fourier series actually converge to functions within the space.
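To make the projection idea concrete, here's a small numerical sketch (the function choice and helper names are illustrative, not from the post) that recovers sine-series coefficients of $f(x) = x$ on $[-\pi, \pi]$ as inner products with the orthogonal basis functions $\sin(nx)$:

```python
import numpy as np

# The functions sin(n x) are mutually orthogonal in L2 on [-pi, pi], so the
# Fourier coefficient of f along each one is just a projection:
#   b_n = <f, sin(n x)> / <sin(n x), sin(n x)>
x = np.linspace(-np.pi, np.pi, 100_001)
dx = x[1] - x[0]

def inner(f_vals, g_vals):
    """L2 inner product on [-pi, pi], approximated by a Riemann sum."""
    return np.sum(f_vals * g_vals) * dx

f = x  # expand f(x) = x in a sine series

for n in range(1, 4):
    s = np.sin(n * x)
    b_n = inner(f, s) / inner(s, s)
    print(n, b_n)  # approximately 2 * (-1)**(n+1) / n
```

This is exactly how coordinates are computed in finite-dimensional linear algebra; the only change is that the dot product has become an integral.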

Interestingly, Fourier put forward his ideas decades before the field of analysis matured and defined the Hilbert space. This is why many mathematicians of the time (notably Lagrange) objected to Fourier theory as not being rigorous enough. However, the theory worked brilliantly for many useful scenarios, and later developments in functional analysis helped put it on a more solid theoretical basis.

Another related example that I love: I’ve mentioned how this theory helps us apply the tools of linear algebra to functions, and generalized Fourier series provide an excellent illustration.

Most people are familiar with trigonometric Fourier series, but the principle is more general and applies to any set of mutually orthogonal functions that form a basis for the vector space. Is there a polynomial Fourier series? Yes, and it can be obtained using one of the classical tools of linear algebra – the Gram–Schmidt procedure. The result is the Legendre polynomials.
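Here's a small numerical sketch of that idea (grid size and names are my own choices): running Gram–Schmidt on sampled monomials $1, x, x^2, x^3$ with the $L^2$ inner product on $[-1, 1]$, then rescaling so that $p_n(1) = 1$, reproduces the first few Legendre polynomials:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200_001)
dx = x[1] - x[0]

def inner(u, v):
    """L2 inner product on [-1, 1], approximated by a Riemann sum."""
    return np.sum(u * v) * dx

monomials = [x ** k for k in range(4)]  # 1, x, x^2, x^3 sampled on the grid

# Gram-Schmidt: subtract from each monomial its components along the
# already-orthogonalized functions.
basis = []
for m in monomials:
    p = m.copy()
    for q in basis:
        p = p - (inner(p, q) / inner(q, q)) * q
    basis.append(p)

for n, p in enumerate(basis):
    p = p / p[-1]              # rescale so p(1) = 1, the Legendre convention
    print(n, p[len(p) // 2])   # value at x = 0: approx. 1, 0, -0.5, 0
```

The printed values at $x = 0$ match $P_0(0) = 1$, $P_1(0) = 0$, $P_2(0) = -\tfrac{1}{2}$, $P_3(0) = 0$ up to discretization error, since Gram–Schmidt on the monomials yields $P_2(x) = \tfrac{1}{2}(3x^2 - 1)$ and $P_3(x) = \tfrac{1}{2}(5x^3 - 3x)$.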

Again, this is all interesting and I hope I can write more on this topic in the future.

Application: Quantum Mechanics

In QM, the states of particles are described by wave functions in a Hilbert space. Inner products of wave functions can be interpreted as probability amplitudes. QM operators can be viewed as linear maps on that space. This lets us apply linear algebra in infinite dimensions and opens up a wealth of useful mathematical tools.

Appendix A: Proof of vector space axioms for functions

As a reminder, we are dealing with the set of functions $f : X \to \mathbb{F}$, where $X$ is any set and $\mathbb{F}$ can be $\mathbb{R}$ or $\mathbb{C}$. This set $V$, together with addition and scalar multiplication of its members, forms a vector space. To prove this, we verify all the vector space axioms:

Associativity of vector addition:

$$((f + g) + h)(x) = (f(x) + g(x)) + h(x) = f(x) + (g(x) + h(x)) = (f + (g + h))(x)$$

This goes through easily because addition of real or complex numbers is itself associative, commutative, etc.

Commutativity of vector addition:

$$(f + g)(x) = f(x) + g(x) = g(x) + f(x) = (g + f)(x)$$

Identity element of vector addition:

The zero function $\mathbf{0}(x) = 0$ acts as an additive identity element:

$$(f + \mathbf{0})(x) = f(x) + 0 = f(x)$$

Inverse element of vector addition:

We define the additive inverse as $(-f)(x) = -f(x)$, and as before:

$$(f + (-f))(x) = f(x) - f(x) = 0 = \mathbf{0}(x)$$

Associativity of scalar multiplication:

For scalars $\alpha$ and $\beta$:

$$(\alpha(\beta f))(x) = \alpha(\beta f(x)) = (\alpha\beta) f(x) = ((\alpha\beta) f)(x)$$

Identity element of scalar multiplication:

We use the scalar $1$ as the identity element of scalar multiplication. Since the result of $f(x)$ is a real or complex scalar, it is trivially true that:

$$(1 \cdot f)(x) = 1 \cdot f(x) = f(x)$$

Distributivity of scalar multiplication over vector addition:

$$(\alpha(f + g))(x) = \alpha(f(x) + g(x)) = \alpha f(x) + \alpha g(x) = (\alpha f + \alpha g)(x)$$

Distributivity of scalar multiplication over scalar addition:

For scalars $\alpha$ and $\beta$:

$$((\alpha + \beta) f)(x) = (\alpha + \beta) f(x) = \alpha f(x) + \beta f(x) = (\alpha f + \beta f)(x)$$

Appendix B: Proof that square integrable functions form a subspace

To show that the set of square integrable functions is a subspace of the vector space $V$, we need to prove the following properties:

Zero element

The zero element is in $L^2$:

$$\int_{-\infty}^{\infty} |\mathbf{0}(x)|^2 \, dx = 0 < \infty$$

Closure under addition

Remember that our functions are complex-valued. For any two complex numbers $z$ and $w$, the triangle inequality holds (see this post):

$$|z + w| \le |z| + |w|$$

This is very easy to show, and so is $2|z||w| \le |z|^2 + |w|^2$. So:

$$|z + w|^2 \le (|z| + |w|)^2 = |z|^2 + 2|z||w| + |w|^2 \le 2|z|^2 + 2|w|^2$$

Armed with this, let's check whether the sum of square integrable functions $f(x)$ and $g(x)$ is square integrable:

$$\int_{-\infty}^{\infty} |f(x) + g(x)|^2 \, dx$$

Since the values $f(x)$ and $g(x)$ are just complex numbers, we can use the inequality shown above to write:

$$\int_{-\infty}^{\infty} |f(x) + g(x)|^2 \, dx \le 2\int_{-\infty}^{\infty} |f(x)|^2 \, dx + 2\int_{-\infty}^{\infty} |g(x)|^2 \, dx$$

Both integrals on the right-hand side are finite, so the one on the left-hand side is also finite.
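As a quick numerical sanity check of this bound (the two functions below are arbitrary illustrative choices of square integrable functions, not from the post):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 100_001)
dx = x[1] - x[0]

def sq_int(vals):
    """Approximate the integral of |vals|^2 over the sampled range."""
    return np.sum(np.abs(vals) ** 2) * dx

f = np.exp(-x ** 2) * (1 + 1j * x)       # complex-valued, decays quickly
g = np.cos(x) * np.exp(-np.abs(x))       # also square integrable

lhs = sq_int(f + g)
rhs = 2 * sq_int(f) + 2 * sq_int(g)
print(lhs <= rhs)  # True: the closure bound holds
```

Since the inequality holds pointwise for every $x$, it survives the integration, which is exactly the argument above.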

Closure under scalar multiplication

Given $f \in L^2$ and a scalar $\alpha$:

$$\int_{-\infty}^{\infty} |\alpha f(x)|^2 \, dx = |\alpha|^2 \int_{-\infty}^{\infty} |f(x)|^2 \, dx < \infty$$



