## Gaussian Integrals and Beyond

By David K. Zhang

## 1. Introduction: The Gaussian Integral

Let $a \in \mathbb{C}$ with $\operatorname{Re} a > 0$, and consider the Gaussian integral

$$I = \int_{-\infty}^{\infty} e^{-ax^2}\, dx.$$

The indefinite integral (antiderivative) of $e^{-ax^2}$ cannot be expressed in terms of elementary functions. However, we can still evaluate the definite integral by means of a clever trick. Observe that

$$I^2 = \left(\int_{-\infty}^{\infty} e^{-ax^2}\, dx\right) \left(\int_{-\infty}^{\infty} e^{-ay^2}\, dy\right) = \int_{\mathbb{R}^2} e^{-a(x^2+y^2)}\, dx\, dy = \int_0^{2\pi} \int_0^{\infty} e^{-ar^2}\, r\, dr\, d\theta = 2\pi \cdot \frac{1}{2a} = \frac{\pi}{a}.$$

This shows[^1] that

$$\int_{-\infty}^{\infty} e^{-ax^2}\, dx = \sqrt{\frac{\pi}{a}}. \tag{1}$$

Note that $\operatorname{Re} a > 0$ is necessary to ensure convergence. (In fact, this integral converges absolutely if and only if $\operatorname{Re} a > 0$.) In the following notes, we will explore a number of generalizations of this remarkable identity.
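
Identity (1) is easy to check numerically. The following is a minimal sketch (assuming NumPy and SciPy are available; the helper `gaussian_integral` is ours, not part of any library) that compares quadrature against $\sqrt{\pi/a}$ for a complex $a$ with positive real part:

```python
import numpy as np
from scipy.integrate import quad

def gaussian_integral(a):
    """Numerically integrate exp(-a*x^2) over the real line.

    quad only handles real-valued integrands, so the real and imaginary
    parts of the integrand are integrated separately.
    """
    re, _ = quad(lambda x: np.exp(-a * x**2).real, -np.inf, np.inf)
    im, _ = quad(lambda x: np.exp(-a * x**2).imag, -np.inf, np.inf)
    return re + 1j * im

a = 2.0 + 1.0j                  # any a with Re(a) > 0
numerical = gaussian_integral(a)
exact = np.sqrt(np.pi / a)      # principal branch, as in footnote 1
print(abs(numerical - exact))   # tiny: quadrature error only
```

Note that `np.sqrt` applied to a complex argument uses the principal branch, which is exactly the branch convention adopted in these notes.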

Exercise 1: Let $a, b \in \mathbb{C}$, and complete the square in the exponent to show that

$$\int_{-\infty}^{\infty} e^{-ax^2 + bx}\, dx = \sqrt{\frac{\pi}{a}}\, e^{b^2/4a}. \tag{2}$$

Show that the integral converges absolutely for any value of $b$, so long as $\operatorname{Re} a > 0$.
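
As a sanity check on identity (2), here is a short sketch (assuming SciPy; the `complex_quad` helper is our own) that verifies it for complex $a$ and $b$:

```python
import numpy as np
from scipy.integrate import quad

def complex_quad(f, lo, hi):
    """Integrate a complex-valued function by splitting real/imaginary parts."""
    re, _ = quad(lambda x: f(x).real, lo, hi)
    im, _ = quad(lambda x: f(x).imag, lo, hi)
    return re + 1j * im

a = 1.5 + 0.5j  # Re(a) > 0 ensures absolute convergence
b = 0.7 - 2.0j  # b may be any complex number
numerical = complex_quad(lambda x: np.exp(-a * x**2 + b * x), -np.inf, np.inf)
exact = np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a))
print(abs(numerical - exact))
```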

## 2. Polynomial Prefactors

Observe that by differentiating under the integral sign, we obtain the following identities:

$$\int_{-\infty}^{\infty} x^2 e^{-ax^2}\, dx = -\frac{\partial}{\partial a} \int_{-\infty}^{\infty} e^{-ax^2}\, dx = -\frac{\partial}{\partial a} \sqrt{\frac{\pi}{a}} = \frac{1}{2a} \sqrt{\frac{\pi}{a}},$$

$$\int_{-\infty}^{\infty} x^4 e^{-ax^2}\, dx = \frac{\partial^2}{\partial a^2} \sqrt{\frac{\pi}{a}} = \frac{3}{(2a)^2} \sqrt{\frac{\pi}{a}}, \qquad \int_{-\infty}^{\infty} x^6 e^{-ax^2}\, dx = \frac{15}{(2a)^3} \sqrt{\frac{\pi}{a}}, \qquad \ldots$$

Here, $(n-1)!! = (n-1)(n-3)(n-5) \cdots$ denotes the double factorial, with the convention $(-1)!! = 1$. This shows that

$$\int_{-\infty}^{\infty} x^n e^{-ax^2}\, dx = \frac{(n-1)!!}{(2a)^{n/2}} \sqrt{\frac{\pi}{a}}$$

for all even integers $n \geq 0$. When $n$ is odd, we have $\int_{-\infty}^{\infty} x^n e^{-ax^2}\, dx = 0$ by symmetry (since we are integrating an odd function over a symmetric domain). Thus, for all nonnegative integers $n$, we have the following result:

$$\int_{-\infty}^{\infty} x^n e^{-ax^2}\, dx = \begin{cases} \dfrac{(n-1)!!}{(2a)^{n/2}} \sqrt{\dfrac{\pi}{a}} & \text{if } n \text{ is even,} \\[2ex] 0 & \text{if } n \text{ is odd.} \end{cases} \tag{3}$$
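
The moment formula (3) can be verified numerically for the first several values of $n$. This sketch assumes SciPy; the `double_factorial` helper is written out explicitly so that $(-1)!! = 1$ holds by the empty-product convention:

```python
import numpy as np
from scipy.integrate import quad

def double_factorial(n):
    """n!! = n(n-2)(n-4)...; the empty product (n <= 1) is 1, so (-1)!! = 1."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def moment(n, a):
    """Numerically integrate x^n exp(-a x^2) over the real line."""
    val, _ = quad(lambda x: x**n * np.exp(-a * x**2), -np.inf, np.inf)
    return val

a = 1.3
errors = []
for n in range(8):
    if n % 2 == 0:
        exact = double_factorial(n - 1) / (2 * a)**(n // 2) * np.sqrt(np.pi / a)
    else:
        exact = 0.0  # odd moments vanish by symmetry
    errors.append(abs(moment(n, a) - exact))
print(max(errors))
```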

Exercise 2: Using a similar technique to the one illustrated in this section, prove the identities

$$\int_{-\infty}^{\infty} x\, e^{-ax^2 + bx}\, dx = \frac{b}{2a} \sqrt{\frac{\pi}{a}}\, e^{b^2/4a} \tag{4}$$

and

$$\int_{-\infty}^{\infty} x^2 e^{-ax^2 + bx}\, dx = \frac{2a + b^2}{4a^2} \sqrt{\frac{\pi}{a}}\, e^{b^2/4a} \tag{5}$$

for all $a, b \in \mathbb{C}$ with $\operatorname{Re} a > 0$.

## 3. A Linear Algebra Detour

Our goal for the rest of these notes will be to study some multidimensional generalizations of the Gaussian integral introduced in sections 1 and 2. We will take a short detour in this section to state and prove some useful linear algebra facts which will be needed later. We begin by specifying our notation.

Notation: We write $\mathbb{R}^{m \times n}$ and $\mathbb{C}^{m \times n}$ for the sets of real and complex $m \times n$ matrices, respectively. If $A \in \mathbb{C}^{m \times n}$, then we write

• $A^T$ for the transpose of $A$,

• $\overline{A}$ for the complex conjugate of $A$ (without transposition), and

• $A^\dagger = \overline{A}^T$ for the conjugate transpose of $A$.

We also define these operations for vectors by regarding them as $n \times 1$ matrices. We will always treat vectors as column vectors, and indicate transposition explicitly if a row vector is desired.

If $A$ is a square matrix, then we write

• $\det A$ for the determinant of $A$,

• $A^{-1}$ for the inverse of $A$,

• $A^{-T} = (A^T)^{-1} = (A^{-1})^T$ for the inverse of the transpose, or equivalently, the transpose of the inverse,

• $\overline{A}^{-1} = \overline{A^{-1}}$ for the inverse of the complex conjugate, or equivalently, the complex conjugate of the inverse, and

• $A^{-\dagger} = (A^\dagger)^{-1} = (A^{-1})^\dagger$ for the inverse of the conjugate transpose, or equivalently, the conjugate transpose of the inverse.

These operations are also defined for $A \in \mathbb{R}^{m \times n}$ by regarding $\mathbb{R}^{m \times n}$ as a subset of $\mathbb{C}^{m \times n}$. In this case, complex conjugation is simply the identity operation.

### 3.1. The Kronecker Product

We now introduce a useful binary operation between matrices of arbitrary size.

Definition: Let $A \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{p \times q}$. The Kronecker product of $A$ and $B$, denoted by $A \otimes B$, is the $mp \times nq$ matrix given in block form by

$$A \otimes B = \begin{bmatrix} a_{11} B & \cdots & a_{1n} B \\ \vdots & \ddots & \vdots \\ a_{m1} B & \cdots & a_{mn} B \end{bmatrix},$$

where $a_{ij} B$ denotes the product of the scalar $a_{ij}$ with the matrix $B$.

For readers familiar with the notion of the tensor product of two vector spaces, we note that the Kronecker product has a natural algebraic interpretation in terms of the tensor product. If $A \in \mathbb{C}^{m \times n}$ is the matrix representation of a linear map $f: V \to W$ with respect to a pair of ordered bases

$$\{v_1, \ldots, v_n\} \subseteq V, \qquad \{w_1, \ldots, w_m\} \subseteq W,$$

and $B \in \mathbb{C}^{p \times q}$ is the matrix representation of a linear map $g: X \to Y$ with respect to a pair of ordered bases

$$\{x_1, \ldots, x_q\} \subseteq X, \qquad \{y_1, \ldots, y_p\} \subseteq Y,$$

then the Kronecker product $A \otimes B$ is simply the matrix representation of the linear map $f \otimes g: V \otimes X \to W \otimes Y$ with respect to the tensor product bases

$$\{v_i \otimes x_j\} \subseteq V \otimes X, \qquad \{w_i \otimes y_j\} \subseteq W \otimes Y,$$

ordered lexicographically by $i$ and $j$. This interpretation makes a number of properties of the Kronecker product immediately clear:

• The Kronecker product is bilinear and associative, but (in general) not commutative.

• The Kronecker product commutes with inversion, transposition, complex conjugation, and conjugate transposition:

$$(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}, \qquad (A \otimes B)^T = A^T \otimes B^T, \qquad \overline{A \otimes B} = \overline{A} \otimes \overline{B}, \qquad (A \otimes B)^\dagger = A^\dagger \otimes B^\dagger.$$

• The Kronecker product is compatible with the usual matrix product in the following sense:

$$(A \otimes B)(C \otimes D) = (AC) \otimes (BD);$$

this is sometimes called the mixed-product property of the Kronecker product.

Readers who are not familiar with tensor products can safely ignore the preceding discussion, provided they are willing to accept the preceding facts on faith.
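
These properties are also easy to confirm numerically with `numpy.kron`. The following sketch (assuming NumPy) checks the mixed-product property and the compatibility with (conjugate) transposition on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
C = rng.standard_normal((3, 2))
D = rng.standard_normal((2, 5))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)

# The Kronecker product commutes with (conjugate) transposition
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
assert np.allclose(np.kron(A, B).conj().T, np.kron(A.conj().T, B.conj().T))
```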

### 3.2. Generalized Eigenvalues and Eigenvectors

Definition: Let $A, B \in \mathbb{C}^{n \times n}$. We call the equation

$$A x = \lambda B x \tag{6}$$

a generalized eigenvalue problem, or simply a generalized eigenproblem. If $(\lambda, x)$ is a solution, then we call $\lambda$ a generalized eigenvalue of $A$ with respect to $B$, and $x$ the corresponding generalized eigenvector. In the special case where $A$ and $B$ are both Hermitian and $B$ is positive-definite, we call (6) a Hermitian-definite eigenproblem.

Note that if $B = I$, the identity matrix, then (6) reduces to the standard eigenproblem for $A$. The Hermitian-definite case is particularly well-behaved, and occurs most frequently in applications.

Theorem: Let $A, B \in \mathbb{C}^{n \times n}$ be Hermitian matrices, and suppose $B$ is positive-definite. Then the Hermitian-definite eigenproblem $Ax = \lambda Bx$ admits a family of solutions $(\lambda_1, x_1), \ldots, (\lambda_n, x_n)$ having the following properties:

1. The eigenvalues $\lambda_1, \ldots, \lambda_n$ are all real.

2. The eigenvectors $x_1, \ldots, x_n$ form a basis of $\mathbb{C}^n$.

3. If $i \neq j$, then $x_i^\dagger B x_j = 0$. (We say that $x_i$ and $x_j$ are $B$-orthogonal.)

Proof: Recall that a positive-definite matrix $B$ has a unique positive-definite square root $B^{1/2}$. Using this fact, we can reduce the generalized eigenvalue problem to a standard eigenvalue problem, as follows:

$$A x = \lambda B x \iff \left(B^{-1/2} A B^{-1/2}\right) \left(B^{1/2} x\right) = \lambda \left(B^{1/2} x\right).$$

Since $B^{-1/2} A B^{-1/2}$ is Hermitian, we now have a standard Hermitian eigenvalue problem in terms of $y = B^{1/2} x$. The theorem now follows by applying the spectral theorem to this standard eigenvalue problem. QED

Observe that by suitably normalizing the generalized eigenvectors $x_1, \ldots, x_n$, we can arrange that $x_i^\dagger B x_j = 0$ when $i \neq j$ and $x_i^\dagger B x_j = 1$ when $i = j$. A set of vectors having this property is said to be $B$-orthonormal. Thus, we can summarize the preceding discussion as follows:

All Hermitian-definite eigenproblems admit a $B$-orthonormal basis of eigenvectors with real eigenvalues.

Corollary: Let $A, B \in \mathbb{C}^{n \times n}$ be Hermitian matrices, and suppose $B$ is positive-definite. Then $A$ and $B$ are simultaneously diagonalized by some invertible matrix $V$ (i.e., both $V^\dagger A V$ and $V^\dagger B V$ are diagonal).

Proof: Let the columns of $V$ be a $B$-orthogonal basis of generalized eigenvectors of $A$ with respect to $B$; the preceding theorem guarantees the existence of such a basis. Then $V^\dagger B V$ is diagonal by $B$-orthogonality, and since $AV = BV\Lambda$ for a diagonal matrix $\Lambda$ of generalized eigenvalues, $V^\dagger A V = (V^\dagger B V) \Lambda$ is diagonal as well. QED

Warning: The phrase “the matrix $V$ diagonalizes the matrix $A$” means “$V^{-1} A V$ is diagonal” to some authors and “$V^\dagger A V$ is diagonal” to others. We will always use the second definition, which is more useful for our purposes, but we note that in other contexts the first is often more natural.
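
SciPy's `scipy.linalg.eigh` solves Hermitian-definite eigenproblems directly when given two matrices, and it returns eigenvectors normalized to be $B$-orthonormal. The following sketch (assuming NumPy and SciPy) checks all three properties of the theorem, plus the simultaneous diagonalization of the corollary:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = X + X.conj().T                   # Hermitian
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = Y @ Y.conj().T + n * np.eye(n)   # Hermitian positive-definite

# eigh(A, B) solves the Hermitian-definite problem A x = lambda B x
eigvals, V = eigh(A, B)

assert np.allclose(eigvals.imag, 0)                   # eigenvalues are real
assert np.allclose(V.conj().T @ B @ V, np.eye(n))     # columns are B-orthonormal
assert np.allclose(V.conj().T @ A @ V, np.diag(eigvals))  # simultaneous diagonalization
```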

## 4. Multiple Dimensions

Theorem: Let $A \in \mathbb{C}^{n \times n}$ be a complex-symmetric matrix ($A^T = A$) with positive-definite real part, and let $b \in \mathbb{C}^n$ be an arbitrary complex vector. Then

$$\int_{\mathbb{R}^n} e^{-x^T A x + b^T x}\, dx = \sqrt{\frac{\pi^n}{\det A}}\, e^{\frac14 b^T A^{-1} b}. \tag{7}$$

Remark: Note that the preceding formula contains only transposes, not conjugate-transposes. This is not a typo! In fact, when the transposes are replaced by conjugate-transposes, the formula becomes incorrect. Indeed, both sides of equation (7) should be holomorphic functions of the entries of $A$ and $b$, and complex conjugation is not holomorphic.

Proof: Let $A = R + iS$ with $R, S \in \mathbb{R}^{n \times n}$ symmetric and $R$ positive-definite. By section 3.2, there exists an invertible matrix $V \in \mathbb{R}^{n \times n}$ simultaneously diagonalizing $R$ and $S$. It follows that

$$V^T A V = V^T R V + i\, V^T S V = D,$$

where $D$ is diagonal. Note that the positive definiteness of $R$ guarantees that the diagonal entries of $V^T R V$ are positive. Thus, the diagonal entries $d_1, \ldots, d_n$ of $D$ have positive real part.

Now, let $c = V^T b$. By performing a change of variable $x = V y$, we can write

$$\int_{\mathbb{R}^n} e^{-x^T A x + b^T x}\, dx = \left|\det V\right| \int_{\mathbb{R}^n} e^{-y^T D y + c^T y}\, dy = \left|\det V\right| \prod_{k=1}^n \sqrt{\frac{\pi}{d_k}}\, e^{c_k^2 / 4 d_k} = \sqrt{\frac{\pi^n}{\det A}}\, e^{\frac14 b^T A^{-1} b},$$

using $(\det V)^2 \det A = \det D$ and $A^{-1} = V D^{-1} V^T$ in the last step. This is the desired result. QED
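
Formula (7) can be spot-checked numerically in two dimensions. The sketch below (assuming SciPy) integrates over a large box, since the integrand decays rapidly; the complex-symmetric matrix `A` and vector `b` are arbitrary test values, not taken from the text:

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0 + 0.5j, 0.3 - 0.2j],
              [0.3 - 0.2j, 1.5 + 0.1j]])  # complex-symmetric, Re(A) positive-definite
b = np.array([0.4 + 0.1j, -0.2 + 0.3j])

def integrand(x, y):
    v = np.array([x, y])
    return np.exp(-v @ A @ v + b @ v)

lim = 8.0  # the integrand is negligible outside [-lim, lim]^2
re, _ = dblquad(lambda y, x: integrand(x, y).real, -lim, lim, -lim, lim)
im, _ = dblquad(lambda y, x: integrand(x, y).imag, -lim, lim, -lim, lim)
numerical = re + 1j * im

exact = np.sqrt(np.pi**2 / np.linalg.det(A)) * np.exp(b @ np.linalg.solve(A, b) / 4)
print(abs(numerical - exact))
```

Here `np.sqrt` takes the principal branch, which agrees with the branch convention of footnote 1 when $\det A$ stays away from the negative real axis, as it does for this mildly complex test matrix.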

Theorem: Let $A \in \mathbb{C}^{n \times n}$ be a complex-symmetric matrix with positive-definite real part. Then for any indices $i$ and $j$,

$$\int_{\mathbb{R}^n} x_i x_j\, e^{-x^T A x}\, dx = \frac{(A^{-1})_{ij}}{2} \sqrt{\frac{\pi^n}{\det A}}.$$

Proof: As in the preceding proof, pick an invertible matrix $V \in \mathbb{R}^{n \times n}$ such that $D = V^T A V$ is diagonal, and let $d_1, \ldots, d_n$ denote the diagonal entries of $D$. Then by performing a change of variable $x = V y$, we can write

$$\int_{\mathbb{R}^n} x_i x_j\, e^{-x^T A x}\, dx = \left|\det V\right| \sum_{k,l=1}^n V_{ik} V_{jl} \int_{\mathbb{R}^n} y_k y_l\, e^{-y^T D y}\, dy.$$

At this point, we consider separately the $k = l$ and $k \neq l$ terms.

• When $k = l$, we want to evaluate the integral

$$\int_{\mathbb{R}^n} y_k^2\, e^{-y^T D y}\, dy.$$

Observe that each term with $m \neq k$ contributes a factor of

$$\int_{-\infty}^{\infty} e^{-d_m y_m^2}\, dy_m = \sqrt{\frac{\pi}{d_m}},$$

while the $m = k$ term contributes a factor of

$$\int_{-\infty}^{\infty} y_k^2\, e^{-d_k y_k^2}\, dy_k = \frac{1}{2 d_k} \sqrt{\frac{\pi}{d_k}}.$$

Multiplying these factors gives the result

$$\int_{\mathbb{R}^n} y_k^2\, e^{-y^T D y}\, dy = \frac{1}{2 d_k} \prod_{m=1}^n \sqrt{\frac{\pi}{d_m}} = \frac{1}{2 d_k} \sqrt{\frac{\pi^n}{\det D}}.$$

(See the previous proof for the evaluation of the product.)

• When $k \neq l$, we want to evaluate the integral

$$\int_{\mathbb{R}^n} y_k y_l\, e^{-y^T D y}\, dy.$$

Again, each term with $m \neq k, l$ contributes a factor of $\sqrt{\pi / d_m}$, while the $m = k$ and $m = l$ terms contribute a factor of

$$\int_{-\infty}^{\infty} y\, e^{-d_k y^2}\, dy = \int_{-\infty}^{\infty} y\, e^{-d_l y^2}\, dy = 0.$$

Multiplying these factors gives the result $0$.

Returning to the original integral, we have

$$\int_{\mathbb{R}^n} x_i x_j\, e^{-x^T A x}\, dx = \left|\det V\right| \sum_{k=1}^n V_{ik} V_{jk}\, \frac{1}{2 d_k} \sqrt{\frac{\pi^n}{\det D}} = \frac{(V D^{-1} V^T)_{ij}}{2} \sqrt{\frac{\pi^n}{\det A}} = \frac{(A^{-1})_{ij}}{2} \sqrt{\frac{\pi^n}{\det A}},$$

which is the desired result. QED
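
For a quick numerical confirmation of this second-moment formula, the sketch below (assuming SciPy) uses a real-symmetric positive-definite test matrix in two dimensions and checks all four index pairs:

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[1.8, 0.4],
              [0.4, 1.2]])  # real-symmetric positive-definite, for simplicity
Ainv = np.linalg.inv(A)
norm = np.sqrt(np.pi**2 / np.linalg.det(A))

lim = 8.0  # the integrand is negligible outside [-lim, lim]^2
errors = []
for i in range(2):
    for j in range(2):
        def integrand(y, x):
            v = np.array([x, y])
            return v[i] * v[j] * np.exp(-v @ A @ v)
        val, _ = dblquad(integrand, -lim, lim, -lim, lim)
        errors.append(abs(val - Ainv[i, j] / 2 * norm))
print(max(errors))
```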

## 5. Vector-of-vectors Formalism

Let $\mathbf{r}_1, \ldots, \mathbf{r}_n \in \mathbb{R}^d$ be a collection of $d$-dimensional vectors. For reasons that will become clear later, we would like to think of these vectors as belonging to a single object $\underline{\mathbf{r}} = (\mathbf{r}_1, \ldots, \mathbf{r}_n)$, which we interpret as a “vector” whose components are themselves vectors. We denote such a vector-of-vectors by a boldface underlined letter $\underline{\mathbf{r}}$, and we write $(\mathbb{R}^d)^n$ to denote the set of $n$-dimensional vectors whose components are $d$-dimensional vectors.

We define the following operations on $(\mathbb{R}^d)^n$:

• scalar multiplication $c\, \underline{\mathbf{r}}$, defined for $c \in \mathbb{R}$ and $\underline{\mathbf{r}} \in (\mathbb{R}^d)^n$ by $(c\, \underline{\mathbf{r}})_i = c\, \mathbf{r}_i$;

• scalar dot product $\underline{\mathbf{r}} \cdot \underline{\mathbf{s}}$, defined for $\underline{\mathbf{r}}, \underline{\mathbf{s}} \in (\mathbb{R}^d)^n$ by $\underline{\mathbf{r}} \cdot \underline{\mathbf{s}} = \sum_{i=1}^n \mathbf{r}_i \cdot \mathbf{s}_i$;

• vector dot product $v \cdot \underline{\mathbf{r}}$, defined for $v \in \mathbb{R}^n$ and $\underline{\mathbf{r}} \in (\mathbb{R}^d)^n$ by $v \cdot \underline{\mathbf{r}} = \sum_{i=1}^n v_i\, \mathbf{r}_i$;

• linear transformation $A\, \underline{\mathbf{r}}$, defined for $A \in \mathbb{R}^{n \times n}$ and $\underline{\mathbf{r}} \in (\mathbb{R}^d)^n$ by $(A\, \underline{\mathbf{r}})_i = \sum_{j=1}^n A_{ij}\, \mathbf{r}_j$;

• scalar quadratic form $\underline{\mathbf{r}}^T A\, \underline{\mathbf{r}}$, defined for $A \in \mathbb{R}^{n \times n}$ and $\underline{\mathbf{r}} \in (\mathbb{R}^d)^n$ by $\underline{\mathbf{r}}^T A\, \underline{\mathbf{r}} = \sum_{i,j=1}^n A_{ij}\, \mathbf{r}_i \cdot \mathbf{r}_j$;

• vector quadratic form $\underline{\mathbf{r}}^T A\, \underline{\mathbf{s}}$, defined for $A \in \mathbb{R}^{n \times n}$, $\underline{\mathbf{r}} \in (\mathbb{R}^d)^n$, and $\underline{\mathbf{s}} \in (\mathbb{R}^d)^n$ by $\underline{\mathbf{r}}^T A\, \underline{\mathbf{s}} = \sum_{i,j=1}^n A_{ij}\, \mathbf{r}_i \cdot \mathbf{s}_j$.

These operations have straightforward generalizations to $(\mathbb{C}^d)^n$, as long as we are careful to distinguish between the transpose $\underline{\mathbf{r}}^T$ and the conjugate transpose $\underline{\mathbf{r}}^\dagger$.

In some circumstances, it will be more convenient to think of a vector-of-vectors in $(\mathbb{R}^d)^n$ as an ordinary vector in $\mathbb{R}^{nd}$. To make this correspondence explicit, we introduce the vectorization operator $\operatorname{vec} : (\mathbb{R}^d)^n \to \mathbb{R}^{nd}$, which takes a vector-of-vectors $\underline{\mathbf{r}}$ and returns an ordinary vector $\operatorname{vec}(\underline{\mathbf{r}}) \in \mathbb{R}^{nd}$ obtained by stacking the components of $\underline{\mathbf{r}}$ on top of each other, with $\mathbf{r}_1$ on top and $\mathbf{r}_n$ on the bottom.

In other circumstances, it will be convenient to regard a vector-of-vectors in $(\mathbb{R}^d)^n$ as a matrix in $\mathbb{R}^{n \times d}$. To this end, we introduce the matrization operator $\operatorname{mat} : (\mathbb{R}^d)^n \to \mathbb{R}^{n \times d}$, which transforms $\underline{\mathbf{r}}$ into a matrix $\operatorname{mat}(\underline{\mathbf{r}}) \in \mathbb{R}^{n \times d}$ with $\mathbf{r}_i^T$ as its $i$th row.

Using $\operatorname{vec}$ and $\operatorname{mat}$, we can state the following equivalents of the operations on $(\mathbb{R}^d)^n$ defined above. The equivalences are straightforward to verify, so detailed proofs will be omitted here.

$$\operatorname{vec}(c\, \underline{\mathbf{r}}) = c \operatorname{vec}(\underline{\mathbf{r}}) \tag{8}$$
$$\underline{\mathbf{r}} \cdot \underline{\mathbf{s}} = \operatorname{vec}(\underline{\mathbf{r}})^T \operatorname{vec}(\underline{\mathbf{s}}) \tag{9}$$
$$v \cdot \underline{\mathbf{r}} = \operatorname{mat}(\underline{\mathbf{r}})^T v \tag{10}$$
$$\operatorname{vec}(A\, \underline{\mathbf{r}}) = (A \otimes I_d) \operatorname{vec}(\underline{\mathbf{r}}) \tag{11}$$
$$\underline{\mathbf{r}}^T A\, \underline{\mathbf{r}} = \operatorname{vec}(\underline{\mathbf{r}})^T (A \otimes I_d) \operatorname{vec}(\underline{\mathbf{r}}) \tag{12}$$
$$\underline{\mathbf{r}}^T A\, \underline{\mathbf{s}} = \operatorname{vec}(\underline{\mathbf{r}})^T (A \otimes I_d) \operatorname{vec}(\underline{\mathbf{s}}) \tag{13}$$
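
These equivalences are easy to confirm numerically. In the sketch below (assuming NumPy), `vec` is implemented as row-major flattening of the `mat` representation, which stacks $\mathbf{r}_1$ on top and $\mathbf{r}_n$ on the bottom:

```python
import numpy as np

n, d = 3, 2
rng = np.random.default_rng(2)
R = rng.standard_normal((n, d))  # mat(r): row i holds the d-vector r_i
S = rng.standard_normal((n, d))
A = rng.standard_normal((n, n))

def vec(M):
    """Row-major flattening: stacks the rows r_1, ..., r_n into one long vector."""
    return M.reshape(-1)

# vec(A r) = (A ⊗ I_d) vec(r)
ok_linear = np.allclose(vec(A @ R), np.kron(A, np.eye(d)) @ vec(R))
# r . s = vec(r)^T vec(s)
ok_dot = np.isclose(np.sum(R * S), vec(R) @ vec(S))
# r^T A s = vec(r)^T (A ⊗ I_d) vec(s), i.e. sum_ij A_ij (r_i . s_j)
ok_quad = np.isclose(np.einsum("ij,ik,jk->", A, R, S),
                     vec(R) @ np.kron(A, np.eye(d)) @ vec(S))
print(ok_linear, ok_dot, ok_quad)
```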

## 6. Explicitly Correlated Gaussians

Let $A, B \in \mathbb{C}^{n \times n}$ be complex-symmetric matrices with positive-definite real parts, and let $\underline{\mathbf{s}}, \underline{\mathbf{t}} \in (\mathbb{C}^d)^n$ be arbitrary complex vectors-of-vectors. Define

$$\phi_A(\underline{\mathbf{r}}) = \exp\!\left(-\underline{\mathbf{r}}^T A\, \underline{\mathbf{r}} + \underline{\mathbf{s}} \cdot \underline{\mathbf{r}}\right), \qquad \phi_B(\underline{\mathbf{r}}) = \exp\!\left(-\underline{\mathbf{r}}^T B\, \underline{\mathbf{r}} + \underline{\mathbf{t}} \cdot \underline{\mathbf{r}}\right).$$

The functions $\phi_A$ and $\phi_B$ are called explicitly correlated Gaussians (ECGs), and are extremely useful in variational calculations for quantum-mechanical few-body systems. The object of this section will be to evaluate the ECG matrix elements

$$\langle \phi_A |\, \hat{O}\, | \phi_B \rangle = \int_{(\mathbb{R}^d)^n} \overline{\phi_A(\underline{\mathbf{r}})}\, \big[\hat{O} \phi_B\big](\underline{\mathbf{r}})\, d\underline{\mathbf{r}}$$

of various differential operators $\hat{O}$. To do this, we will need to evaluate several integrals over $(\mathbb{R}^d)^n$. Our strategy will be to reduce these to equivalent integrals over $\mathbb{R}^{nd}$ using the vectorization identities stated in the previous section. Then, we can apply standard formulas for multidimensional Gaussian integrals.

### 6.1. Overlap Matrix Element

We begin by evaluating the overlap matrix element $\langle \phi_A | \phi_B \rangle$. This simply reduces to the multidimensional Gaussian integral (7) after the application of a few vectorization identities. Using the substitution $C = \overline{A} + B$, we can write

$$\langle \phi_A | \phi_B \rangle = \int_{(\mathbb{R}^d)^n} \exp\!\left(-\underline{\mathbf{r}}^T C\, \underline{\mathbf{r}} + \left(\overline{\underline{\mathbf{s}}} + \underline{\mathbf{t}}\right) \cdot \underline{\mathbf{r}}\right) d\underline{\mathbf{r}} = \left(\frac{\pi^n}{\det C}\right)^{d/2} \exp\!\left(\tfrac14 \left(\overline{\underline{\mathbf{s}}} + \underline{\mathbf{t}}\right)^T C^{-1} \left(\overline{\underline{\mathbf{s}}} + \underline{\mathbf{t}}\right)\right),$$

where we have used $\det(C \otimes I_d) = (\det C)^d$.
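
The vectorization step behind this reduction can be checked numerically: applying formula (7) to the $nd$-dimensional integral with matrix $C \otimes I_d$ must agree with the reduced $n \times n$ expression. A sketch (assuming NumPy; the real positive-definite `C` and random `u` are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 3, 2
X = rng.standard_normal((n, n))
C = X @ X.T + n * np.eye(n)      # symmetric positive-definite test matrix
u = rng.standard_normal((n, d))  # mat form of a test vector-of-vectors

K = np.kron(C, np.eye(d))        # C ⊗ I_d
v = u.reshape(-1)                # vec(u), with u_1 on top

# Formula (7) applied directly in R^{nd}...
full = np.sqrt(np.pi**(n * d) / np.linalg.det(K)) * np.exp(v @ np.linalg.solve(K, v) / 4)
# ...versus the reduced n-by-n form using det(C ⊗ I_d) = det(C)^d
reduced = (np.pi**n / np.linalg.det(C))**(d / 2) \
    * np.exp(np.trace(u.T @ np.linalg.solve(C, u)) / 4)
print(abs(full - reduced))  # the two expressions agree
```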

### 6.2. Quadratic Form Matrix Element

We now evaluate the matrix element $\langle \phi_A |\, \underline{\mathbf{r}}^T Q\, \underline{\mathbf{r}}\, | \phi_B \rangle$ of a quadratic form for an arbitrary matrix $Q \in \mathbb{R}^{n \times n}$. With an appropriate choice of $Q$, this can be used to evaluate the expectation values $\langle \mathbf{r}_i^2 \rangle$ and $\langle (\mathbf{r}_i - \mathbf{r}_j)^2 \rangle$ of squared radial and interparticle distances, respectively.

### 6.3. Kinetic Energy Matrix Element

By choosing an appropriate “mass matrix” $M \in \mathbb{R}^{n \times n}$, the kinetic energy operator can be written as a quadratic form $\underline{\hat{\mathbf{p}}}^T M\, \underline{\hat{\mathbf{p}}}$ in terms of the vector momentum operator

$$\underline{\hat{\mathbf{p}}} = (\hat{\mathbf{p}}_1, \ldots, \hat{\mathbf{p}}_n), \qquad \hat{\mathbf{p}}_k = -i \nabla_k$$

(in units where $\hbar = 1$). Since we already know how to evaluate the matrix element of a quadratic form in position space, our strategy will be to take the Fourier transform of $\phi_A$ and $\phi_B$ so that we can evaluate the integral

$$\langle \phi_A |\, \underline{\hat{\mathbf{p}}}^T M\, \underline{\hat{\mathbf{p}}}\, | \phi_B \rangle = \int_{(\mathbb{R}^d)^n} \overline{\tilde\phi_A(\underline{\mathbf{p}})} \left(\underline{\mathbf{p}}^T M\, \underline{\mathbf{p}}\right) \tilde\phi_B(\underline{\mathbf{p}})\, d\underline{\mathbf{p}}$$

in momentum space.

Exercise 3: Let $A \in \mathbb{C}^{n \times n}$ be a complex-symmetric matrix with positive-definite real part, and let $\underline{\mathbf{s}} \in (\mathbb{C}^d)^n$ be arbitrary. Define $\phi : (\mathbb{R}^d)^n \to \mathbb{C}$ by

$$\phi(\underline{\mathbf{r}}) = \exp\!\left(-\underline{\mathbf{r}}^T A\, \underline{\mathbf{r}} + \underline{\mathbf{s}} \cdot \underline{\mathbf{r}}\right).$$

Show that the Fourier transform of $\phi$ is given by

$$\tilde\phi(\underline{\mathbf{p}}) = (2\pi)^{-nd/2} \int_{(\mathbb{R}^d)^n} \phi(\underline{\mathbf{r}})\, e^{-i\, \underline{\mathbf{p}} \cdot \underline{\mathbf{r}}}\, d\underline{\mathbf{r}} = \left(\frac{1}{2^n \det A}\right)^{d/2} \exp\!\left(\tfrac14 \left(\underline{\mathbf{s}} - i \underline{\mathbf{p}}\right)^T A^{-1} \left(\underline{\mathbf{s}} - i \underline{\mathbf{p}}\right)\right).$$

Show that this can also be written as

$$\tilde\phi(\underline{\mathbf{p}}) = \left(\frac{1}{2^n \det A}\right)^{d/2} e^{\frac14 \underline{\mathbf{s}}^T A^{-1} \underline{\mathbf{s}}} \exp\!\left(-\tfrac14\, \underline{\mathbf{p}}^T A^{-1}\, \underline{\mathbf{p}} - \tfrac{i}{2}\, \underline{\mathbf{p}}^T A^{-1}\, \underline{\mathbf{s}}\right),$$

and hence that the Fourier transform of an ECG in position space is an ECG in momentum space.
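
The claimed transform can be spot-checked in the simplest case $n = d = 1$, where it reduces to $(2\pi)^{-1/2} \int e^{-ar^2 + sr} e^{-ipr}\, dr = (2a)^{-1/2} e^{(s - ip)^2 / 4a}$. The sketch below assumes SciPy and the unitary Fourier convention with prefactor $(2\pi)^{-nd/2}$ used above; the `complex_quad` helper is our own:

```python
import numpy as np
from scipy.integrate import quad

def complex_quad(f, lo, hi):
    """Integrate a complex-valued function by splitting real/imaginary parts."""
    re, _ = quad(lambda x: f(x).real, lo, hi)
    im, _ = quad(lambda x: f(x).imag, lo, hi)
    return re + 1j * im

a = 1.2 + 0.4j  # Re(a) > 0
s = 0.5 - 0.3j
p = 0.8

numerical = (2 * np.pi)**-0.5 * complex_quad(
    lambda r: np.exp(-a * r**2 + s * r) * np.exp(-1j * p * r), -np.inf, np.inf)
exact = (1 / (2 * a))**0.5 * np.exp((s - 1j * p)**2 / (4 * a))
print(abs(numerical - exact))
```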

Exercise 4: Assuming the result of Exercise 3, verify that the inverse Fourier transform of $\tilde\phi$ gives back $\phi$. That is, show that

$$\phi(\underline{\mathbf{r}}) = (2\pi)^{-nd/2} \int_{(\mathbb{R}^d)^n} \tilde\phi(\underline{\mathbf{p}})\, e^{i\, \underline{\mathbf{p}} \cdot \underline{\mathbf{r}}}\, d\underline{\mathbf{p}}.$$
For simplicity of notation, we will assume in the following derivation that $A$ and $\underline{\mathbf{s}}$ are complex-conjugated whenever they appear, suppressing the overline signs. Using the result of Exercise 3, we have

$$\langle \phi_A |\, \underline{\hat{\mathbf{p}}}^T M\, \underline{\hat{\mathbf{p}}}\, | \phi_B \rangle = \left(\frac{1}{4^n \det A \det B}\right)^{d/2} e^{\frac14 \underline{\mathbf{s}}^T A^{-1} \underline{\mathbf{s}} + \frac14 \underline{\mathbf{t}}^T B^{-1} \underline{\mathbf{t}}} \int_{(\mathbb{R}^d)^n} \left(\underline{\mathbf{p}}^T M\, \underline{\mathbf{p}}\right) \exp\!\left(-\underline{\mathbf{p}}^T F\, \underline{\mathbf{p}} + \underline{\mathbf{w}} \cdot \underline{\mathbf{p}}\right) d\underline{\mathbf{p}},$$

where we have defined $F = \tfrac14 \left(A^{-1} + B^{-1}\right)$ and $\underline{\mathbf{w}} = \tfrac{i}{2} \left(A^{-1} \underline{\mathbf{s}} - B^{-1} \underline{\mathbf{t}}\right)$. We use the formula obtained in the previous section to integrate the quadratic form, giving

$$\langle \phi_A |\, \underline{\hat{\mathbf{p}}}^T M\, \underline{\hat{\mathbf{p}}}\, | \phi_B \rangle = \left(\frac{1}{4^n \det A \det B}\right)^{d/2} \left(\frac{\pi^n}{\det F}\right)^{d/2} e^{\frac14 \underline{\mathbf{s}}^T A^{-1} \underline{\mathbf{s}} + \frac14 \underline{\mathbf{t}}^T B^{-1} \underline{\mathbf{t}} + \frac14 \underline{\mathbf{w}}^T F^{-1} \underline{\mathbf{w}}} \left(\frac{d}{2} \operatorname{tr}\!\left(M F^{-1}\right) + \frac14\, \underline{\mathbf{w}}^T F^{-1} M F^{-1}\, \underline{\mathbf{w}}\right).$$

Observe that $F = \tfrac14 \left(A^{-1} + B^{-1}\right) = \tfrac14 A^{-1} (A + B) B^{-1}$. This allows us to simplify the product of determinants:

$$\left(\frac{1}{4^n \det A \det B}\right)^{d/2} \left(\frac{\pi^n}{\det F}\right)^{d/2} = \left(\frac{\pi^n}{\det(A + B)}\right)^{d/2}.$$

The appearance of the term $\left(\pi^n / \det(A + B)\right)^{d/2}$ suggests that we might be able to factor out an overlap integral $\langle \phi_A | \phi_B \rangle$. This turns out to be a great idea, because it sets the ball rolling for a miraculous cancellation. We start by writing

$$\langle \phi_A |\, \underline{\hat{\mathbf{p}}}^T M\, \underline{\hat{\mathbf{p}}}\, | \phi_B \rangle = \langle \phi_A | \phi_B \rangle\, e^{E} \left(\frac{d}{2} \operatorname{tr}\!\left(M F^{-1}\right) + \frac14\, \underline{\mathbf{w}}^T F^{-1} M F^{-1}\, \underline{\mathbf{w}}\right),$$

$$E = \tfrac14\, \underline{\mathbf{s}}^T A^{-1} \underline{\mathbf{s}} + \tfrac14\, \underline{\mathbf{t}}^T B^{-1} \underline{\mathbf{t}} + \tfrac14\, \underline{\mathbf{w}}^T F^{-1}\, \underline{\mathbf{w}} - \tfrac14\, (\underline{\mathbf{s}} + \underline{\mathbf{t}})^T (A + B)^{-1}\, (\underline{\mathbf{s}} + \underline{\mathbf{t}}).$$

Now, by expanding the $\underline{\mathbf{w}}^T F^{-1} \underline{\mathbf{w}}$ and $(\underline{\mathbf{s}} + \underline{\mathbf{t}})^T (A + B)^{-1} (\underline{\mathbf{s}} + \underline{\mathbf{t}})$ terms, it turns out that

$$E = \tfrac14\, \underline{\mathbf{s}}^T \left[A^{-1} - A^{-1} B (A + B)^{-1} - (A + B)^{-1}\right] \underline{\mathbf{s}} + \tfrac14\, \underline{\mathbf{t}}^T \left[B^{-1} - B^{-1} A (A + B)^{-1} - (A + B)^{-1}\right] \underline{\mathbf{t}}.$$

But observe that $A^{-1} - A^{-1} B (A + B)^{-1} - (A + B)^{-1} = 0$, since

$$A^{-1} B (A + B)^{-1} = A^{-1} \left[(A + B) - A\right] (A + B)^{-1} = A^{-1} - (A + B)^{-1}.$$

A similar argument shows that $B^{-1} - B^{-1} A (A + B)^{-1} - (A + B)^{-1} = 0$. This means that the entire exponential term vanishes!

At this point, we are only left with

$$\langle \phi_A |\, \underline{\hat{\mathbf{p}}}^T M\, \underline{\hat{\mathbf{p}}}\, | \phi_B \rangle = \langle \phi_A | \phi_B \rangle \left(\frac{d}{2} \operatorname{tr}\!\left(M F^{-1}\right) + \frac14\, \underline{\mathbf{w}}^T F^{-1} M F^{-1}\, \underline{\mathbf{w}}\right),$$

which, by eliminating $F$ and $\underline{\mathbf{w}}$, can equivalently be written as

$$\langle \phi_A |\, \underline{\hat{\mathbf{p}}}^T M\, \underline{\hat{\mathbf{p}}}\, | \phi_B \rangle = \langle \phi_A | \phi_B \rangle \left(2d \operatorname{tr}\!\left(M A (A + B)^{-1} B\right) - \underline{\mathbf{g}}^T M\, \underline{\mathbf{g}}\right), \qquad \underline{\mathbf{g}} = B (A + B)^{-1} \underline{\mathbf{s}} - A (A + B)^{-1} \underline{\mathbf{t}}.$$

### 6.4. Single-particle Density Matrix Element

The single-particle density operator determines the probability density (as a function of ) for the position of the particle with coordinates .

In the next-to-last step, we use the following computation to simplify the argument of the exponential. Let $v \in \mathbb{R}^n$ and $\mathbf{q} \in \mathbb{R}^d$, and note that $v \otimes \mathbf{q}$ is the vectorization of the vector-of-vectors with components $v_i \mathbf{q}$. Then by applying the mixed-product property of the Kronecker product, we have

$$(v \otimes \mathbf{q})^T (C \otimes I_d)^{-1} (v \otimes \mathbf{q}) = \left(v^T C^{-1} v\right) \otimes \left(\mathbf{q}^T \mathbf{q}\right) = \left(v^T C^{-1} v\right) |\mathbf{q}|^2.$$

### 6.5. Potential Energy Matrix Element

Suppose we can write the potential energy as a sum of terms of the form $V(v \cdot \underline{\mathbf{r}})$ for some $v \in \mathbb{R}^n$. By an appropriate choice of $v$, this form can represent most potentials of practical interest, including central-force potentials ($v = e_i$ gives $V(\mathbf{r}_i)$) and two-body interactions ($v = e_i - e_j$ gives $V(\mathbf{r}_i - \mathbf{r}_j)$). Recall that

$$V(v \cdot \underline{\mathbf{r}}) = \int_{\mathbb{R}^d} V(\mathbf{x})\, \delta(v \cdot \underline{\mathbf{r}} - \mathbf{x})\, d\mathbf{x}.$$

This allows us to write

$$\langle \phi_A |\, V(v \cdot \underline{\mathbf{r}})\, | \phi_B \rangle = \int_{\mathbb{R}^d} V(\mathbf{x})\, \langle \phi_A |\, \delta(v \cdot \underline{\mathbf{r}} - \mathbf{x})\, | \phi_B \rangle\, d\mathbf{x},$$

where the matrix element of the delta function is evaluated as in the previous section. In general, this can be a difficult integral to evaluate. However, if $V$, $\phi_A$, and $\phi_B$ are assumed to be spherically symmetric (so that $V(\mathbf{x})$ depends only on the magnitude $x = |\mathbf{x}|$, and $\underline{\mathbf{s}} = \underline{\mathbf{t}} = \underline{\mathbf{0}}$), then the integral reduces to

$$\langle \phi_A |\, V(v \cdot \underline{\mathbf{r}})\, | \phi_B \rangle = \int_0^\infty V(x)\, S_d(x)\, \langle \phi_A |\, \delta(v \cdot \underline{\mathbf{r}} - \mathbf{x})\, | \phi_B \rangle\, dx.$$

Here, $S_d(x) = \dfrac{2 \pi^{d/2}}{\Gamma(d/2)}\, x^{d-1}$ is the surface area of a sphere in $\mathbb{R}^d$ with radius $x$.

## 7. Summary of Results

For spherically symmetric ($\underline{\mathbf{s}} = \underline{\mathbf{t}} = \underline{\mathbf{0}}$) ECGs, the matrix elements derived above reduce to

$$\langle \phi_A | \phi_B \rangle = \left(\frac{\pi^n}{\det(\overline{A} + B)}\right)^{d/2}, \qquad \langle \phi_A |\, \underline{\hat{\mathbf{p}}}^T M\, \underline{\hat{\mathbf{p}}}\, | \phi_B \rangle = 2d \operatorname{tr}\!\left(M \overline{A} (\overline{A} + B)^{-1} B\right) \langle \phi_A | \phi_B \rangle.$$

[^1]: In this document, $\sqrt{\phantom{z}}$ denotes the usual principal branch of the square root function, with a branch cut along the negative real axis. This ensures that $\sqrt{z}$ has nonnegative real part for any $z \in \mathbb{C}$.