## Notes on the Representation Theory of Finite Groups

By David K. Zhang

##### Under Construction!

Definition: Let $G$ be a group, and let $V$ be a vector space over a field $\mathbb{F}$. A representation of $G$ on $V$ is a group homomorphism $\rho: G \to \operatorname{GL}(V)$, where $\operatorname{GL}(V)$ denotes the group of invertible linear transformations $V \to V$.

We can think of the representation $\rho$ as defining a left action of the group $G$ on the vector space $V$, given by

$$g \cdot \mathbf{v} := \rho(g)\mathbf{v}$$

for each $g \in G$ and $\mathbf{v} \in V$. (For brevity of notation, we will often write $g\mathbf{v}$ in place of $\rho(g)\mathbf{v}$.) Since $\rho$ is a homomorphism, we must have $\rho(e) = \operatorname{id}_V$, where $e \in G$ denotes the identity element, and $\rho(gh) = \rho(g)\rho(h)$ for all $g, h \in G$.

Definition: The cardinal number $\dim V$ is called the degree of the representation $\rho$.

In these notes, we will focus on the special case in which $G$ is a finite group and $V$ is a finite-dimensional vector space over the field $\mathbb{C}$ of complex numbers. This case occurs frequently in physical and chemical applications. TODO: Add some motivation for studying representation theory.

In this special case, we can identify $V$ with the vector space $\mathbb{C}^n$ of $n$-tuples of complex numbers by choosing an ordered basis of $V$ to identify with the standard basis of $\mathbb{C}^n$. The representation $\rho$ thus associates to each group element $g \in G$ an invertible $n \times n$ complex matrix $\rho(g)$ (which depends implicitly on the choice of basis in $V$).

Definition: The character of the representation $\rho$ is the function $\chi_\rho: G \to \mathbb{C}$ defined by

$$\chi_\rho(g) := \operatorname{tr} \rho(g)$$

for all $g \in G$. (Recall that the trace of the matrix representation of a linear transformation is independent of the choice of basis.)
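The definitions above are easy to check computationally for a small group. The following sketch (standard-library Python; all helper names are mine, not from the notes) builds the permutation representation of $S_3$ on $\mathbb{C}^3$, verifies the homomorphism property, and computes the character, which for a permutation representation counts fixed points.

```python
from itertools import permutations

# Sketch: the permutation representation of S_3, sending each permutation p of
# {0,1,2} to the 3x3 matrix rho(p) with rho(p)[i][j] = 1 if p(j) = i, else 0.
# Matrices are stored as tuples of tuples of integers.

def perm_matrix(p):
    n = len(p)
    return tuple(tuple(1 if p[j] == i else 0 for j in range(n)) for i in range(n))

def mat_mul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def compose(p, q):
    # (p . q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(q)))

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

S3 = list(permutations(range(3)))

# rho is a homomorphism: rho(p . q) = rho(p) rho(q).
assert all(perm_matrix(compose(p, q)) == mat_mul(perm_matrix(p), perm_matrix(q))
           for p in S3 for q in S3)

# The character chi(p) = tr(rho(p)) counts the fixed points of p.
chi = {p: trace(perm_matrix(p)) for p in S3}
assert chi[(0, 1, 2)] == 3   # identity fixes all three points
assert chi[(1, 0, 2)] == 1   # a transposition fixes one point
assert chi[(1, 2, 0)] == 0   # a 3-cycle fixes none
```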

## 1. Group Actions

Definition: Let $G$ be a group, and let $X$ be an arbitrary set. An action of $G$ on $X$ is a function $\alpha: G \times X \to X$ satisfying the following axioms:

1. $\alpha(g, \alpha(h, x)) = \alpha(gh, x)$ for all $g, h \in G$ and $x \in X$.
2. $\alpha(e, x) = x$ for all $x \in X$, where $e \in G$ denotes the identity element of $G$.

Notation: We write $\alpha: G \curvearrowright X$ as a shorthand for the statement "$\alpha$ is an action of $G$ on $X$." When the function $\alpha$ is clear from context, it is common to suppress it notationally by writing $g \cdot x$ or simply $gx$ instead of $\alpha(g, x)$. In this case, we say that $X$ is a $G$-set, and the preceding axioms take the following form:

1. $g(hx) = (gh)x$ for all $g, h \in G$ and $x \in X$.
2. $ex = x$ for all $x \in X$, where $e \in G$ denotes the identity element of $G$.

Group actions arise whenever we want to think of a group as a collection of transformations acting on some set. For example, the special orthogonal group $\operatorname{SO}(3)$ (which consists of all $3 \times 3$ real orthogonal matrices with determinant $+1$) is naturally viewed as a collection of rotations acting on $\mathbb{R}^3$. In this case, we have an obvious action $\alpha: \operatorname{SO}(3) \times \mathbb{R}^3 \to \mathbb{R}^3$ given by the usual matrix-vector product $\alpha(R, \mathbf{v}) := R\mathbf{v}$.

The pervasiveness of group actions in mathematics and mathematical physics can hardly be overstated. Indeed, the very notion of a group is essentially an algebraic formalization of the idea of a collection of invertible transformations which is closed under composition. Thus, it is common for a group to be understood and characterized in terms of its actions.

This approach can provide significant conceptual advantages over attempting to understand a group through its internal algebraic structure. It is, for example, much easier to think of $\operatorname{SO}(3)$ as the group of rigid rotations of Euclidean $3$-space, as opposed to a collection of $3 \times 3$ arrays of real numbers, constrained to satisfy certain algebraic identities and endowed with a mysterious rule of combination.

Suppose now that we are given a group action $\alpha: G \curvearrowright X$. We can consider, for each $g \in G$, the function $\alpha_g: X \to X$ obtained by fixing the first argument of $\alpha$ at $g$ and allowing the second to vary. Explicitly, we define

$$\alpha_g(x) := \alpha(g, x)$$

for all $g \in G$ and $x \in X$. Observe that each of the functions $\alpha_g$ is a bijection, since by axioms 1 and 2, $\alpha_g$ has an inverse given by $\alpha_{g^{-1}}$. This means that each $\alpha_g$ is simply a permutation of the elements of $X$. Indeed, we can equivalently think of the group action $\alpha$ as a homomorphism of $G$ into the symmetric group $\operatorname{Sym}(X)$ of permutations of $X$, i.e., a group homomorphism $\sigma: G \to \operatorname{Sym}(X)$ given by $\sigma(g) := \alpha_g$. (This homomorphism is an embedding, i.e., injective, precisely when the action is faithful.)
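As a concrete illustration of these axioms (a minimal sketch of my own, not from the notes), consider the cyclic group $\mathbb{Z}/4$ acting on a four-element set by cyclic shifts; we can verify both axioms and the fact that each $\alpha_g$ is a permutation with inverse $\alpha_{g^{-1}}$:

```python
# Sketch: Z/4 = {0,1,2,3} acting on X = {0,1,2,3} by alpha(g, x) = (g + x) mod 4.
# The group operation on Z/4 is addition mod 4, with identity element 0.

G = range(4)
X = range(4)

def alpha(g, x):
    return (g + x) % 4

# Axiom 1: alpha(g, alpha(h, x)) = alpha(g + h, x).
assert all(alpha(g, alpha(h, x)) == alpha((g + h) % 4, x)
           for g in G for h in G for x in X)

# Axiom 2: the identity element acts trivially.
assert all(alpha(0, x) == x for x in X)

# Each alpha_g is a permutation of X, with inverse alpha_{g^{-1}}.
sigma = {g: tuple(alpha(g, x) for x in X) for g in G}
assert all(sorted(sigma[g]) == list(X) for g in G)            # bijectivity
assert all(alpha((-g) % 4, alpha(g, x)) == x for g in G for x in X)
```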

### 1.1. Orbits and Stabilizers

At this point, there are two natural questions we may wish to ask regarding the behavior of the action of a group $G$ on a particular element $x \in X$.

1. What elements of $X$ can $x$ be transformed into under the action of $G$?
2. Which elements of $G$ effect a transformation that leaves $x$ unchanged?

To explore these questions, we make the following definitions.

Definition: If $X$ is a $G$-set and $x \in X$, then the orbit of $x$ under $G$ is the set

$$Gx := \{ gx : g \in G \} \subseteq X.$$

The stabilizer of $x$ under $G$ is the set

$$G_x := \{ g \in G : gx = x \} \subseteq G.$$

(These notations are unfortunately very similar, particularly in handwriting.)

It is easy to see that the stabilizer of any element $x \in X$ is a subgroup of $G$. Indeed, if two permutations $\alpha_g$ and $\alpha_h$ leave $x$ unchanged, then clearly their composition $\alpha_{gh} = \alpha_g \circ \alpha_h$ also leaves $x$ unchanged, and the inverse $\alpha_{g^{-1}}$ also leaves $x$ unchanged. For this reason, $G_x$ is often called the stabilizer subgroup or isotropy subgroup of $x$ with respect to $G$.

It is also easy to see that the orbits of any two elements $x, y \in X$ are either disjoint or identical; they cannot partially overlap. Indeed, if the intersection $Gx \cap Gy$ is nonempty, then there exist $g, h \in G$ such that $gx = hy$. It follows that we can write $x = g^{-1}hy$, and hence that

$$Gx = G(g^{-1}h)y = Gy.$$

Thus, we see that the collection of all orbits of elements of $X$, denoted by

$$X/G := \{ Gx : x \in X \},$$

forms a partition of the set $X$.
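These definitions can be illustrated with a toy computation (a hypothetical example of my own, not from the notes): $\mathbb{Z}/3$ acting on the six vertices of a hexagon by rotating two steps at a time splits the vertex set into two orbits, which indeed partition it.

```python
# Sketch: Z/3 acts on X = {0,...,5} by alpha(g, x) = (x + 2g) mod 6,
# i.e., rotating a hexagon by two vertices at a time.

G = range(3)
X = range(6)

def alpha(g, x):
    return (x + 2 * g) % 6

def orbit(x):
    return frozenset(alpha(g, x) for g in G)

def stabilizer(x):
    return {g for g in G if alpha(g, x) == x}

orbits = {orbit(x) for x in X}   # the set X/G of distinct orbits

# The orbits are pairwise disjoint and cover X, i.e., they partition X.
assert orbits == {frozenset({0, 2, 4}), frozenset({1, 3, 5})}
assert sum(len(O) for O in orbits) == len(X)

# Here every stabilizer is trivial: 2g = 0 (mod 6) forces g = 0 in Z/3.
assert all(stabilizer(x) == {0} for x in X)
```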

Having introduced orbits and stabilizers side-by-side, we hope that the reader is beginning to suspect some sort of connection between these two notions. Indeed, such connections are provided by the following two results:

Theorem 1. (Orbit-Stabilizer Theorem)
TODO: Write up the Orbit-Stabilizer Theorem and its proof.

Theorem 2. (Burnside's Lemma)
TODO: Write up Burnside's Lemma and its proof.
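Although the write-ups above are still TODO, the standard statements are as follows: for a finite group $G$, the Orbit-Stabilizer Theorem says $|G| = |Gx| \cdot |G_x|$ for every $x \in X$, and Burnside's lemma says the number of orbits equals the average number of fixed points, $|X/G| = \frac{1}{|G|} \sum_{g \in G} |\operatorname{Fix}(g)|$. Both can be spot-checked numerically; the toy example below (my own, not from the notes) uses $\mathbb{Z}/4$ rotating two-colorings of a square's vertices.

```python
from itertools import product

# Z/4 acts on 2-colorings of a square's 4 vertices by rotation.

G = range(4)
X = list(product((0, 1), repeat=4))   # colorings, as 4-tuples of 0/1

def act(g, c):
    # rotate g steps: vertex i receives the color of vertex (i - g) mod 4
    return tuple(c[(i - g) % 4] for i in range(4))

def orbit(c):
    return frozenset(act(g, c) for g in G)

def stabilizer(c):
    return [g for g in G if act(g, c) == c]

# Orbit-Stabilizer: |G| = |orbit| * |stabilizer| for every coloring.
assert all(len(G) == len(orbit(c)) * len(stabilizer(c)) for c in X)

# Burnside: the number of orbits equals the average number of fixed points.
num_orbits = len({orbit(c) for c in X})
avg_fixed = sum(sum(1 for c in X if act(g, c) == c) for g in G) // len(G)
assert num_orbits == avg_fixed == 6   # (16 + 2 + 4 + 2) / 4 = 6 distinct necklaces
```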

## 2. Vector Spaces

Definition: Let $\mathbb{F}$ be a field. A vector space over $\mathbb{F}$ consists of a group $(V, +)$ together with a binary operation $\mathbb{F} \times V \to V$, called scalar multiplication, here written as $(a, \mathbf{v}) \mapsto a\mathbf{v}$, that satisfies the following axioms:

1. $1\mathbf{v} = \mathbf{v}$ for all $\mathbf{v} \in V$, where $1$ denotes the multiplicative identity element of $\mathbb{F}$.
2. $a(b\mathbf{v}) = (ab)\mathbf{v}$ for all $a, b \in \mathbb{F}$ and $\mathbf{v} \in V$.
3. $a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}$ for all $a \in \mathbb{F}$ and $\mathbf{u}, \mathbf{v} \in V$.
4. $(a + b)\mathbf{v} = a\mathbf{v} + b\mathbf{v}$ for all $a, b \in \mathbb{F}$ and $\mathbf{v} \in V$.

Notation: Here, we adopt the convention (common among physicists and engineers) that elements of $V$, referred to as vectors, are denoted by upright bold lowercase letters, while elements of $\mathbb{F}$, referred to as scalars, are denoted by italic lowercase letters. Accordingly, we refer to the group operation of $V$ as vector addition, and we will write $\mathbf{0}$ and $0$ for the additive identity elements of $V$ and $\mathbb{F}$, respectively.

Observe that axioms 1 and 2 are precisely the conditions for scalar multiplication to define a group action $\mathbb{F}^\times \curvearrowright V$, where $\mathbb{F}^\times$ denotes the multiplicative group of nonzero elements of $\mathbb{F}$. Thus, scalar multiplication can be thought of as a "multiplicative field action" of $\mathbb{F}$ on $V$ which is required to distribute over addition in $V$ and $\mathbb{F}$ (as stipulated in axioms 3 and 4).

Note that while we are writing the group operation of $V$ additively, we have not assumed that $V$ is abelian! Many textbook authors unnecessarily make this assumption, when in fact we can prove from axioms 1–4 that $V$ must be abelian.

Theorem: Let $V$ be a vector space over a field $\mathbb{F}$. Then $(V, +)$ is an abelian group, where $+$ denotes vector addition.

Proof: Let $\mathbf{u}, \mathbf{v} \in V$. Observe that

$$\mathbf{u} + \mathbf{u} + \mathbf{v} + \mathbf{v} = \mathbf{u} + \mathbf{v} + \mathbf{u} + \mathbf{v},$$

since by axioms 1, 3, and 4, both the LHS and RHS are equal to the vector $(1 + 1)(\mathbf{u} + \mathbf{v})$. By associativity, this implies

$$\mathbf{u} + (\mathbf{u} + \mathbf{v}) + \mathbf{v} = \mathbf{u} + (\mathbf{v} + \mathbf{u}) + \mathbf{v},$$

and by cancelling $\mathbf{u}$ on the left and $\mathbf{v}$ on the right, we obtain $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$, as desired. QED

This shows that the requirement of a "multiplicative field action" in the definition of a vector space is quite restrictive: no non-abelian group admits one!

Examples of vector spaces:

1. Any field $\mathbb{F}$ is trivially a vector space over itself. Simply take vector addition and scalar multiplication to be field addition and multiplication in $\mathbb{F}$. The vector space axioms then follow from the field axioms.

2. More generally, for any $n \in \mathbb{N}$, there is a natural vector space structure on $\mathbb{F}^n$, the set of all $n$-tuples of elements of $\mathbb{F}$, with vector addition and scalar multiplication defined by

$$(u_1, \dots, u_n) + (v_1, \dots, v_n) := (u_1 + v_1, \dots, u_n + v_n), \qquad a(v_1, \dots, v_n) := (av_1, \dots, av_n).$$

We will later see that this is an example of a vector space of dimension $n$, coinciding with the intuition that there are $n$ independent "degrees of freedom" given by modifying each component separately. The most familiar examples of vector spaces (as typically seen in undergraduate linear algebra) are obtained by taking $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$.

Note that for $n = 0$, the vector space obtained from this construction is simply the singleton set $\{()\}$, where $()$ is the empty tuple. This is called the trivial vector space over $\mathbb{F}$ and is the unique vector space having dimension $0$ and cardinality $1$.

3. Even more generally, let $X$ be any set, and consider the set $\mathbb{F}^X$ of all functions $X \to \mathbb{F}$, endowed with pointwise addition and scalar multiplication. This is naturally a vector space in the same way as the previous example; indeed, the previous example is just the special case of a finite indexing set $X = \{1, \dots, n\}$.

Note that this vector space does not, in general, have dimension equal to the cardinality of $X$. This may seem counterintuitive, since a function $f: X \to \mathbb{F}$ has one "degree of freedom" for each element of $X$. This is a subtlety in the way that dimension is defined for vector spaces, which we will return to later.
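A quick computational sketch of example 3 (helper names are mine, and $\mathbb{Q}$ stands in for the field): functions $X \to \mathbb{Q}$ stored as dictionaries, with pointwise operations, satisfy axioms 1–4.

```python
from fractions import Fraction as F

# Sketch: the vector space of functions X -> Q, stored as dicts, with
# pointwise addition and scalar multiplication.  Example 2 (Q^n) is the
# special case of an indexing set X = {0, ..., n-1}.

X = ("a", "b", "c")   # an arbitrary finite indexing set

def add(f, g):
    return {x: f[x] + g[x] for x in X}

def smul(a, f):
    return {x: a * f[x] for x in X}

zero = {x: F(0) for x in X}

f = {"a": F(1), "b": F(2), "c": F(3)}
g = {"a": F(1, 2), "b": F(0), "c": F(-1)}
a, b = F(2, 3), F(5)

# Axioms 1-4 from the definition above, checked pointwise:
assert smul(F(1), f) == f                                  # 1. identity scalar
assert smul(a, smul(b, f)) == smul(a * b, f)               # 2. compatibility
assert smul(a, add(f, g)) == add(smul(a, f), smul(a, g))   # 3. distributes over vector addition
assert add(smul(a, f), smul(b, f)) == smul(a + b, f)       # 4. distributes over scalar addition
assert add(f, g) == add(g, f) and add(f, zero) == f        # abelian group with identity
```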

Definition: Let $V$ and $W$ be vector spaces over a common field $\mathbb{F}$. A function $T: V \to W$ is called a linear transformation, linear map, or vector space homomorphism if it satisfies the following conditions:

1. $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$ for all $\mathbf{u}, \mathbf{v} \in V$.
2. $T(a\mathbf{v}) = aT(\mathbf{v})$ for all $a \in \mathbb{F}$ and $\mathbf{v} \in V$.

### 2.1. Vector Spaces over Subfields

Suppose $V$ is a vector space over a field $\mathbb{F}$, and let $\mathbb{K}$ be a subfield of $\mathbb{F}$. Then by restricting the scalar multiplication function $\mathbb{F} \times V \to V$ to $\mathbb{K} \times V$, we obtain a vector space structure of $V$ over the smaller field $\mathbb{K}$.

Definition: Let $V$ be a vector space over $\mathbb{F}$, and let $S$ be a subset of $V$. A linear combination of elements of $S$ is a vector of the form

$$a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_n \mathbf{v}_n,$$

where $n$ is a natural number, $a_1, \dots, a_n \in \mathbb{F}$, and $\mathbf{v}_1, \dots, \mathbf{v}_n \in S$. If $n = 0$, then we define the empty linear combination to have the value $\mathbf{0}$. The span of $S$ is the set of all linear combinations of elements of $S$.

If we wish to be specific about the field from which the coefficients $a_1, \dots, a_n$ are drawn, we can refer to linear combinations as $\mathbb{F}$-linear combinations. For example, we might wish to take $\mathbb{R}$-linear combinations in a vector space over $\mathbb{C}$. Note that our convention for the empty linear combination implies that the span of the empty set is the singleton $\{\mathbf{0}\}$.

Definition: Let $V$ be a vector space over $\mathbb{F}$. A subset $S$ of $V$ is said to be linearly independent if no linear combination of elements of $S$ equals $\mathbf{0}$ except the trivial linear combination. In other words, for every $n \in \mathbb{N}$ and every pair of sequences $a_1, \dots, a_n \in \mathbb{F}$ and $\mathbf{v}_1, \dots, \mathbf{v}_n \in S$ (with the $\mathbf{v}_i$ distinct), we have

$$a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_n \mathbf{v}_n = \mathbf{0}$$

only when $a_1 = a_2 = \cdots = a_n = 0$. Otherwise, if some nontrivial linear combination gives $\mathbf{0}$, then $S$ is said to be linearly dependent.

Note that any subset of $V$ containing $\mathbf{0}$ is automatically linearly dependent. Moreover, the empty subset of any vector space is linearly independent, since the condition above is vacuously satisfied.
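For vectors in $\mathbb{Q}^n$, linear independence can be tested algorithmically: a finite list of vectors is independent exactly when the matrix they form has rank equal to the number of vectors. Below is a sketch using exact rational arithmetic (the `rank` helper is my own, not from the notes).

```python
from fractions import Fraction as F

def rank(rows):
    """Rank of a matrix over Q, by Gaussian elimination with exact Fractions."""
    rows = [list(map(F, r)) for r in rows]
    r = 0  # index of the next pivot row
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                factor = rows[i][col] / rows[r][col]
                rows[i] = [x - factor * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)

assert independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])   # standard basis of Q^3
assert not independent([(1, 2, 3), (2, 4, 6)])          # second is twice the first
assert not independent([(1, 1), (1, 0), (0, 1)])        # 3 vectors in Q^2
assert independent([])                                  # empty set is independent
```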

Definition: Let $V$ be a vector space over $\mathbb{F}$. A basis of $V$ is a subset of $V$ which is linearly independent and whose span is equal to $V$.

Lemma: Let $V$ be a vector space over $\mathbb{F}$. If a subset $S$ of $V$ is linearly independent, then for any vector $\mathbf{w} \in V$ lying outside the span of $S$, the set $S \cup \{\mathbf{w}\}$ is linearly independent.

Proof: by contraposition. Suppose $S \cup \{\mathbf{w}\}$ is linearly dependent. Then we can pick $a, a_1, \dots, a_n \in \mathbb{F}$, not all zero, and distinct $\mathbf{v}_1, \dots, \mathbf{v}_n \in S$ such that

$$a\mathbf{w} + a_1 \mathbf{v}_1 + \cdots + a_n \mathbf{v}_n = \mathbf{0}.$$

If $a = 0$, then we have

$$a_1 \mathbf{v}_1 + \cdots + a_n \mathbf{v}_n = \mathbf{0}$$

with $a_1, \dots, a_n$ not all zero, showing that $S$ is linearly dependent. Otherwise, if $a \neq 0$, then we can write

$$\mathbf{w} = -\frac{a_1}{a} \mathbf{v}_1 - \cdots - \frac{a_n}{a} \mathbf{v}_n,$$

showing that $\mathbf{w}$ lies in the span of $S$. QED

Theorem: Every vector space has a basis.

Proof: Let $V$ be a vector space over a field $\mathbb{F}$, and consider the poset $P$ of linearly independent subsets of $V$, ordered by inclusion. We will proceed by applying Zorn's lemma to $P$, so we first verify that the hypotheses of Zorn's lemma hold. $P$ is clearly nonempty, since the empty set is vacuously a linearly independent subset of $V$.

Suppose that $\{S_i\}_{i \in I}$ is a chain in $P$, indexed by some index set $I$. Let $S := \bigcup_{i \in I} S_i$. Clearly, $S$ is an upper bound of the chain; we claim that it is also linearly independent. Indeed, if $S$ were linearly dependent, then we could find a nontrivial linear combination of elements of $S$ equal to zero. But since only a finite number of vectors are involved in such a linear combination, and the sets $S_i$ are totally ordered by inclusion, all of these vectors would have to be present in a single $S_i$ for some $i \in I$. Thus, $S_i$ would be linearly dependent, contradicting the assumption that $S_i \in P$.

Zorn's lemma now guarantees the existence of a maximal element $B$ of $P$. We claim that $B$ must be a basis of $V$. Indeed, if the span of $B$ were not equal to $V$, then by the previous lemma, we could enlarge $B$ by adding a vector lying outside of its span. This would contradict the maximality of $B$; hence, $B$ is a basis of $V$. QED
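In the finite-dimensional setting, the Zorn's-lemma argument specializes to a concrete greedy procedure: scan a spanning list of vectors and keep each one that lies outside the span of those kept so far; by the lemma above, the kept set stays linearly independent at every step. A sketch over $\mathbb{Q}^n$ (all helper names are mine, not from the notes):

```python
from fractions import Fraction as F

def rank(rows):
    """Rank of a matrix over Q, by Gaussian elimination with exact Fractions."""
    rows = [list(map(F, r)) for r in rows]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                factor = rows[i][col] / rows[r][col]
                rows[i] = [x - factor * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def extract_basis(vectors):
    basis = []
    for v in vectors:
        # Keep v only if it lies outside the span of the current basis
        # (i.e., appending it strictly increases the rank); by the lemma,
        # the enlarged set remains linearly independent.
        if rank(basis + [list(v)]) > rank(basis):
            basis.append(list(v))
    return basis

spanning = [(1, 1, 0), (2, 2, 0), (0, 1, 0), (1, 2, 0)]
B = extract_basis(spanning)
assert len(B) == 2                                         # these vectors span a plane in Q^3
assert rank(B) == len(B)                                   # B is linearly independent
assert rank(B + [list(v) for v in spanning]) == rank(B)    # B spans all of them
```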

Lemma: If a vector space has a finite basis, then all of its bases are finite and have the same size.

Proof: TO BE WRITTEN

Theorem: All bases of a given vector space have the same cardinality.

Proof: TO BE WRITTEN

Definition: The dimension of a vector space is the cardinality of one (and hence all) of its bases.