Numbers

Scalars

See also: Peano axioms, *Hyperoperation, *Algebraic extension

The basis of all of mathematics is the *"Next" function. See Graph theory.

Next(0)=1
Next(1)=2
Next(2)=3
Next(3)=4
Next(4)=5

We might express this by saying that one differs from nothing as two differs from one. This defines the natural numbers (denoted $\mathbb{N}$). Natural numbers are those used for counting.

The ordering of the natural numbers has the convenient property of being transitive. That means that if a<b and b<c then it follows that a<c. In fact the natural numbers are totally ordered. See *Order theory.

Integers

Addition (See Tutorial:arithmetic) is defined as repeatedly calling the Next function, and its inverse is subtraction. But this leads to the ability to write equations like $x + 1 = 0$ for which there is no answer among natural numbers. To provide an answer, mathematicians generalize to the set of all integers (denoted $\mathbb{Z}$ because Zahlen means numbers in German), which includes negative integers.

The Additive identity is zero because x + 0 = x.
The absolute value or modulus of x is defined as $|x| = x$ if $x \ge 0$ and $|x| = -x$ if $x < 0$.
*Integers form a ring (denoted $\mathbb{Z}$), a subring of the field of rational numbers. Ring is defined below.
$\mathbb{Z}_n$ or $\mathbb{Z}/n\mathbb{Z}$ is used to denote the set of *integers modulo n.
*Modular arithmetic is essentially arithmetic in the quotient ring Z/nZ (which has n elements).
Consider the ring of integers Z and the ideal of even numbers, denoted by 2Z. Then the quotient ring Z / 2Z has only two elements, zero for the even numbers and one for the odd numbers; applying the definition, [z] = z + 2Z := {z + 2y : y ∈ Z}, where 2Z is the ideal of even numbers. It is naturally isomorphic to the finite field with two elements, F2. Intuitively: if you think of all the even numbers as 0, then every integer is either 0 (if it is even) or 1 (if it is odd and therefore differs from an even number by 1).
An *ideal is a special subset of a ring. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3.
A *principal ideal is an ideal in a ring $R$ that is generated by a single element $a$ of $R$ through multiplication by every element of $R$.
A *prime ideal is a subset of a ring that shares many important properties of a prime number in the ring of integers. The prime ideals for the integers are the sets that contain all the multiples of a given prime number.
The study of integers is called Number theory.
$a \mid b$ means a divides b.
$a \nmid b$ means a does not divide b.
$p^a \,\|\, n$ means $p^a$ exactly divides n (i.e. $p^a$ divides n but $p^{a+1}$ does not).
A prime number is a natural number greater than 1 that can only be divided by itself and one.
If a, b, c, and d are distinct primes and $x = abc$ and $y = c^2 d$ then $\mathrm{lcm}(x,y) = abc^2d$ and $\gcd(x,y) = c$, so (as the short Python check below illustrates):
xy = lcm · gcd = $abc^2d \cdot c$
Two integers a and b are said to be relatively prime, mutually prime, or coprime if the only positive integer that divides both of them is 1. Any prime number that divides one does not divide the other. This is equivalent to their greatest common divisor (gcd) being 1.
(See Tutorial:least common multiples)
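A quick Python check of the identity $xy = \mathrm{lcm}(x,y) \cdot \gcd(x,y)$, using illustrative primes:

```python
from math import gcd

# Illustrative distinct primes (any choice works)
a, b, c, d = 2, 3, 5, 7
x = a * b * c          # x = abc = 30
y = c * c * d          # y = c^2 d = 175

lcm = x * y // gcd(x, y)     # lcm(x, y) = xy / gcd(x, y)

# xy = lcm(x, y) * gcd(x, y) holds for any positive integers
assert x * y == lcm * gcd(x, y)
print(gcd(x, y), lcm)        # 5 and abc^2 d = 1050
```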


The *Ulam spiral. Black pixels = *prime numbers.


Rational numbers

Multiplication (See Tutorial:multiplication) is defined as repeated addition, and its inverse is division. But this leads to equations like $2x = 1$ for which there is no answer. The solution is to generalize to the set of rational numbers (denoted $\mathbb{Q}$, for quotient) which include fractions (See Tutorial:fractions). Any number which isn't rational is irrational. See also *p-adic number

The set of all rational numbers except zero forms a *multiplicative group, a group whose operation is multiplication and whose every element is invertible.
Rational numbers form a *division algebra because every non-zero element has an inverse. The ability to find the inverse of every element turns out to be quite useful. A great deal of time and effort has been spent trying to find division algebras.
Rational numbers form a Field. A field is a set on which addition, subtraction, multiplication, and division are defined. See below.
The Multiplicative identity is one because x * 1 = x.
Division by zero is undefined and undefinable. 1/0 exists nowhere on the complex plane. It does, however, exist on the Riemann sphere (often called the extended complex plane) where it is surprisingly well behaved. See also *Wheel theory and L'Hôpital's rule.
(Addition and multiplication are fast but division is slow *even for computers.)
Binary multiplication

From Wikipedia:Binary number

Decimal  Binary
0        0
1        1
2        10
3        11
4        100
5        101
6        110
7        111
8        1000
9        1001
10       1010
11       1011
12       1100
13       1101
14       1110
15       1111

The binary numbers 101 and 110 are multiplied as follows:

         1 0 1      (5 in decimal)
       × 1 1 0      (6 in decimal)
       --------
         0 0 0    
   +   1 0 1      
   + 1 0 1 
   ------------
   = 1 1 1 1 0      (30 in decimal)

Binary numbers can also be multiplied with bits after a *binary point:

             1 0 1 . 1 0 1       (5.625 in decimal)
           × 1 1 0 . 0 1         (6.25  in decimal)
           -------------------
                 1 . 0 1 1 0 1   
   +           0 0 . 0 0 0 0     
   +         0 0 0 . 0 0 0
   +       1 0 1 1 . 0 1
   +     1 0 1 1 0 . 1
   ---------------------------
   =   1 0 0 0 1 1 . 0 0 1 0 1  (35.15625 in decimal)
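A minimal Python sketch of the shift-and-add scheme used above (integer case only; the function name is my own):

```python
def binary_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by shift-and-add,
    mirroring the long-multiplication layout above."""
    result = 0
    shift = 0
    while b:
        if b & 1:                  # current bit of the multiplier is 1
            result += a << shift   # add a copy of a, shifted left
        b >>= 1
        shift += 1
    return result

print(binary_multiply(0b101, 0b110))   # 5 * 6 = 30 = 0b11110
```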


From Wikipedia:Power of two

2^1 = 2
2^2 = 4
2^4 = 16
2^8 = 256
2^16 = 65,536
2^32 = 4,294,967,296
2^64 = 18,446,744,073,709,551,616 (20 digits)
2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 (39 digits)

Our universe is tiny. Starting with only 2 people and doubling the population every 100 years will, in only 27,000 years (270 doublings, giving 2 × 2^270 ≈ 4 × 10^81 people), result in enough people to completely fill the observable universe.
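A quick sanity check of that arithmetic in Python:

```python
# 27,000 years / 100 years per doubling = 270 doublings, starting from 2 people
population = 2 * 2**270
print(f"{population:.2e}")   # about 3.8e+81 people
```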

Irrational and complex numbers

Exponentiation (See Tutorial:exponents) is defined as repeated multiplication, and its inverses are roots and logarithms. But this leads to multiple equations with no solutions:

Equations like $x^2 = 2$. The solution is to generalize to the set of algebraic numbers (denoted $\overline{\mathbb{Q}}$). (See also *algebraic integer and algebraically closed.) To see a proof that the square root of two is irrational see Square root of 2.
Equations like $2^x = 3$. The solution (because $x = \log_2 3$ is transcendental) is to generalize to the set of Real numbers (denoted $\mathbb{R}$).
Equations like $x^2 = -1$ and $x^2 + 1 = 0$. The solution is to generalize to the set of complex numbers (denoted $\mathbb{C}$) by defining $i = \sqrt{-1}$. A single complex number consists of a real part a and an imaginary part bi (See Tutorial:complex numbers). Imaginary numbers (denoted $i\mathbb{R}$) often occur in equations involving change with respect to time. If friction is resistance to motion then imaginary friction would be resistance to change of motion with respect to time. (In other words, imaginary friction would be mass.) In fact, in the equation for the Spacetime interval (given below), *time itself is an imaginary quantity.
The Complex conjugate of the complex number $a + bi$ is $a - bi$. (Not to be confused with the dual of a vector.)
Complex numbers form an *Algebra over a field (K-algebra) because complex multiplication is *Bilinear.
The complex numbers are not ordered. However the absolute value or *modulus of a complex number is: $|a + bi| = \sqrt{a^2 + b^2}$
A Gaussian integer a + bi is a Gaussian prime if and only if either:
  • one of a, b is zero and the absolute value of the other is a prime number of the form 4n + 3 (with n a nonnegative integer), or
  • both are nonzero and a² + b² is a prime number (which will not be of the form 4n + 3).
There are n solutions of $z^n = 1$ (the nth roots of unity).
0^0 = 1. See Empty product.

Hypercomplex numbers

Complex numbers can be used to represent and perform rotations but only in 2 dimensions. Hypercomplex numbers like quaternions (denoted $\mathbb{H}$), octonions (denoted $\mathbb{O}$), and *sedenions (denoted $\mathbb{S}$) are one way to generalize complex numbers to some (but not all) higher dimensions.

A quaternion can be thought of as a complex number whose coefficients are themselves complex numbers (hence a hypercomplex number):

$q = (a + bi) + (c + di)\,j$

where

$i^2 = j^2 = k^2 = ijk = -1$

and

$k = ij$

Any real finite-dimensional *division algebra over the reals must be one of the following:[1] the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, or the octonions $\mathbb{O}$.

The following is known about the dimension of a finite-dimensional division algebra A over a field K:

  • dim A = 1 if K is algebraically closed,
  • dim A = 1, 2, 4 or 8 if K is *real closed, and
  • If K is neither algebraically nor real closed, then there are infinitely many dimensions in which there exist division algebras over K.

*Split-complex numbers (hyperbolic complex numbers) are similar to complex numbers except that $i^2 = +1$.

Tetration

Tetration is defined as repeated exponentiation and its inverses are called super-root and super-logarithm.
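A minimal Python sketch of tetration as repeated exponentiation (the function name is my own; note the right-associative evaluation, folding from the top down):

```python
def tetration(a: float, n: int) -> float:
    """Compute a tetrated n times: a^(a^(...^a)) with n copies of a."""
    result = 1   # zero copies give 1, by the empty-product convention
    for _ in range(n):
        result = a ** result
    return result

print(tetration(2, 4))   # 2^(2^(2^2)) = 2^16 = 65536
```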

Hyperreal numbers

See also: *Non-standard calculus

When a quantity, like the charge of a single electron, becomes so small that it is insignificant we, quite justifiably, treat it as though it were zero. A quantity that can be treated as though it were zero, even though it very definitely is not, is called infinitesimal. If $Q$ is a finite amount of charge then, using Leibniz's notation, $dQ$ would be an infinitesimal amount of charge. See Differential

Likewise when a quantity becomes so large that a regular finite quantity becomes insignificant then we call it infinite. We would say that the mass of the ocean is infinite ($\omega$). But compared to the mass of the Milky Way galaxy our ocean is insignificant. So we would say the mass of the Galaxy is doubly infinite ($\omega^2$).

Infinity and the infinitesimal are called Hyperreal numbers (denoted $^*\mathbb{R}$). Hyperreals behave, in every way, exactly like real numbers. For example, $2\omega$ is exactly twice as big as $\omega$. In reality, the mass of the ocean is a real number so it is hardly surprising that it behaves like one. See *Epsilon numbers and *Big O notation

In ancient times infinity was called the "all".

Groups and rings

Main articles: Algebraic structure, Abstract algebra, and *group theory

Addition and multiplication can be generalized in so many ways that mathematicians have created a whole system just to categorize them.

Any straight line through the origin forms a group. Adding any 2 points on the line results in a 3rd point that is also on the line.

A *magma is a set with a single *closed binary operation (usually, *but not always, addition. See *Additive group).

a + b = c

A *semigroup is a magma where the addition is associative. See also *Semigroupoid

a + (b + c) = (a + b) + c

A *monoid is a semigroup with an additive identity element.

a + 0 = a

A *group is a monoid with additive inverse elements.

a + (-a) = 0

An *abelian group is a group where the addition is commutative.

a + b = b + a


A *pseudo-ring is an abelian group that also has a second closed, associative, binary operation (usually, but not always, multiplication).

a * (b * c) = (a * b) * c
And these two operations satisfy a distributive law.
a(b + c) = ab + ac

A *ring is a pseudo-ring that has a multiplicative identity

a * 1 = a

A *commutative ring is a ring where multiplication commutes, (e.g. *integers)

a * b = b * a

A *field is a commutative ring where every nonzero element has a multiplicative inverse (and thus there is a multiplicative identity),

a * (1/a) = 1
The existence of a multiplicative inverse for every nonzero element automatically implies that there are no *zero divisors in a field: if ab = 0 for some a ≠ 0, then we must have b = 0 (we call this having no zero-divisors).


$\mathbb{Z}/n\mathbb{Z}$ is the *quotient ring of $\mathbb{Z}$ by the ideal $n\mathbb{Z}$ containing all integers divisible by n.

Thus $\mathbb{Z}/n\mathbb{Z}$ is a field when $n\mathbb{Z}$ is a *maximal ideal, that is, when n is prime.
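A small Python sketch of this fact: $\mathbb{Z}/n\mathbb{Z}$ is a field exactly when every nonzero element has an inverse mod n, which happens exactly when n is prime:

```python
from math import gcd

def units(n: int) -> list[int]:
    """Elements of Z/nZ that have a multiplicative inverse mod n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

for n in (5, 6):
    invertible = units(n)
    is_field = len(invertible) == n - 1    # all nonzero elements invertible
    print(n, invertible, "field" if is_field else "not a field")
# 5 [1, 2, 3, 4] field         (5 is prime)
# 6 [1, 5] not a field         (2, 3, 4 are zero divisors mod 6)
```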

The *center of a *group is the commutative subgroup of elements c such that c+x = x+c for every x. See also: *Centralizer and normalizer.

The *center of a *noncommutative ring is the commutative subring of elements c such that cx = xc for every x.

The *characteristic of ring R, denoted char(R), is the number of times one must add the *multiplicative identity to get the *additive identity (or 0 if no such number exists).

The circle of center 0 and radius 1 in the complex plane is a Lie group with complex multiplication.

A *Lie group is a group that is also a smooth differentiable manifold, in which the group operation is multiplication rather than addition.[2] (Differentiation requires the ability to multiply and divide which is usually impossible with most groups.)

All non-zero *nilpotent elements are *zero divisors.

The square matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ is nilpotent: its square is the zero matrix.

Numbers don't lie. (But they sure help)

From Wikipedia:Mathematical fallacy:

Let a = b. Then:

1. a = b
2. a² = ab
3. a² − b² = ab − b²
4. (a − b)(a + b) = b(a − b)
5. a + b = b
6. 2b = b
7. 2 = 1

The fallacy is in line 5: the progression from line 4 to line 5 involves division by a − b, which is zero since a = b. Since division by zero is undefined, the argument is invalid.

Intervals

[-2,5[ or [-2,5) denotes the interval from -2 to 5, including -2 but excluding 5.
[3..7] denotes all integers from 3 to 7.
The set of all reals is unbounded at both ends.
An open interval does not include its endpoints.
*Compactness is a property that generalizes the notion of a subset being closed and bounded.
The *unit interval is the closed interval [0,1]. It is often denoted I.
The *unit square is a square whose sides have length 1.
Often, "the" unit square refers specifically to the square in the Cartesian plane with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1).
The *unit disk in the complex plane is the set of all complex numbers of absolute value less than one and is often denoted $\mathbb{D}$.

Vectors

See also: *Algebraic geometry, *Algebraic variety, *Scheme, *Algebraic manifold, and Linear algebra

The one dimensional number line can be generalized to a multidimensional Cartesian coordinate system thereby creating multidimensional math (i.e. geometry). See also *Curvilinear coordinates

For sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where $a \in A$ and $b \in B$.[3]

$\mathbb{R}^2$ is the Cartesian product $\mathbb{R} \times \mathbb{R}$
$\mathbb{C}$ is, as a set, the Cartesian product $\mathbb{R} \times \mathbb{R}$ (See *Complexification)
The *direct product generalizes the Cartesian product.

(See also *Direct sum)

If we think of $\mathbb{R}$ as the *set of real numbers, then the direct product $\mathbb{R} \times \mathbb{R}$ is precisely just the Cartesian product.
If we think of $\mathbb{R}$ as the *group of real numbers under addition, then the direct product $\mathbb{R} \times \mathbb{R}$ still consists of the Cartesian product. The difference is that $\mathbb{R} \times \mathbb{R}$ is now a group. We have to also say how to add their elements.
To add ordered pairs, we define the sum $(a, b) + (c, d)$ to be $(a + c, b + d)$.
However, if we think of $\mathbb{R}$ as the *field of real numbers, then the direct product $\mathbb{R} \times \mathbb{R}$ does not exist – naively defining multiplication coordinatewise in a similar manner to the above examples would not result in a field since the element $(1, 0)$ does not have a multiplicative inverse.
If the arithmetic operation is written as +, as it usually is in abelian groups, then we use the *direct sum. If the arithmetic operation is written as × or ⋅ or using juxtaposition (as in the expression $xy$) we use direct product. In the case of two summands, or any finite number of summands, the direct sum is the same as the direct product.

A vector space is a coordinate space with vector addition and scalar multiplication (multiplication of a vector and a scalar belonging to a field).

$\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ are basis vectors:
$\mathbf{a} = a_x \mathbf{i} + a_y \mathbf{j} + a_z \mathbf{k}$

If $\mathbf{e}_x$, $\mathbf{e}_y$, and $\mathbf{e}_z$ are orthogonal unit *basis vectors
and $\mathbf{u}$ and $\mathbf{v}$ are arbitrary vectors
and $a$ and $b$ are scalars belonging to a field then we can (and usually do) write: $\mathbf{v} = v_x \mathbf{e}_x + v_y \mathbf{e}_y + v_z \mathbf{e}_z$
See also: Linear independence
A *module generalizes a vector space by allowing multiplication of a vector and a scalar belonging to a ring.

Coordinate systems define the length of vectors parallel to one of the axes but leave all other lengths undefined. This concept of "length", which only works for certain vectors, is generalized as the "norm", which works for all vectors. The norm of vector $\mathbf{v}$ is denoted $\|\mathbf{v}\|$. The double bars are used to avoid confusion with the absolute value.

Taxicab metric (called the L1 norm: $\|\mathbf{x}\|_1 = \sum_i |x_i|$. See *Lp space. Sometimes called Lebesgue spaces. See also Lebesgue measure.) A circle in L1 space is shaped like a diamond.
c² = (a+b)² − 4(ab/2)
c² = a² + b²

In Euclidean space the norm (called the L2 norm, $\|\mathbf{x}\|_2 = \sqrt{\sum_i x_i^2}$) doesn't depend on the choice of coordinate system. As a result, rigid objects can rotate in Euclidean space. See the proof of the Pythagorean theorem above. L2 is the only *Hilbert space among Lp spaces.
In Minkowski space (See *Pseudo-Euclidean space) the Spacetime interval is $s^2 = (ct)^2 - x^2 - y^2 - z^2$
In *complex space the most common norm of an n dimensional vector is obtained by treating it as though it were a regular real valued 2n dimensional vector in Euclidean space: $\|\mathbf{z}\| = \sqrt{|z_1|^2 + \cdots + |z_n|^2}$
Infinity norm: $\|\mathbf{x}\|_\infty = \max_i |x_i|$. (In this space a circle is shaped like a square.)
A *Banach space is a *normed vector space that is also a complete metric space (there are no points missing from it).
Manifolds

Tangent bundle of a circle

A manifold is a type of topological space in which each point has an infinitely small neighbourhood that is homeomorphic to Euclidean space. A manifold is locally, but not globally, Euclidean. A *Riemannian metric on a manifold allows distances and angles to be measured.

A *Tangent space is the set of all vectors tangent to a manifold M at point p.
Informally, a *tangent bundle (red cylinder in image to the right) on a differentiable manifold (blue circle) is obtained by joining all the *tangent spaces (red lines) together in a smooth and non-overlapping manner.[4] The tangent bundle always has twice as many dimensions as the original manifold.
A *vector bundle is the same thing minus the requirement that it be tangent.
A *fiber bundle is the same thing minus the requirement that the fibers be vector spaces.
The *cotangent bundle (*Dual bundle) of a differentiable manifold is obtained by joining all the *cotangent spaces (pseudovector spaces).
The cotangent bundle always has twice as many dimensions as the original manifold.
Sections of that bundle are known as differential one-forms.
The circle of center 0 and radius 1 in the *complex plane is a Lie group with complex multiplication.

A *Lie group is a group that is also a finite-dimensional smooth manifold, in which the group operation is multiplication rather than addition.[5] *n×n invertible matrices (See below) are a Lie group.

A *Lie algebra (See *Infinitesimal transformation) is a local or linearized version of a Lie group.
The Lie derivative generalizes the Lie bracket which generalizes the wedge product which is a generalization of the cross product which only works in 3 dimensions.

Spaces

An arrow from A to B means that space A is also a kind of space B.

Around 1735, Euler discovered the formula $V - E + F = 2$ relating the number of vertices, edges and faces of a convex polyhedron, and hence of a *planar graph. No metric is required to prove this formula. The study and generalization of this formula is the origin of topology.

A topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence.[6]

Hierarchy of mathematical spaces.

The metric is a function that defines a concept of distance between any two points. The distance from a point to itself is zero. The distance between two distinct points is positive.

1. $d(x, y) \ge 0$
2. $d(x, y) = 0$ iff $x = y$
3. $d(x, y) = d(y, x)$
4. $d(x, z) \le d(x, y) + d(y, z)$

A norm is the generalization to real vector spaces of the intuitive notion of distance in the real world. All norms on a finite-dimensional vector space are equivalent from a topological viewpoint as they induce the same topology (although the resulting metric spaces need not be the same).[7]

A norm is a function that assigns a strictly positive length or size to each vector in a vector space—except for the zero vector, which is assigned a length of zero. [8]

  1. $\|\mathbf{v}\| \ge 0$
  2. $\|\mathbf{v}\| = 0$ iff $\mathbf{v} = \mathbf{0}$ (the zero vector)
  3. $\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$ (The *Triangle inequality)

A seminorm, on the other hand, is allowed to assign zero length to some non-zero vectors (in addition to the zero vector).[9]


Multiplication of vectors

Multiplication can be generalized to allow for multiplication of vectors in 3 different ways:

Dot product

Dot product (a Scalar): $\mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y + u_z v_z = \|\mathbf{u}\|\,\|\mathbf{v}\| \cos\theta$

Strangely, only parallel components multiply.
The dot product can be generalized to the bilinear form $B(\mathbf{u}, \mathbf{v}) = A_{ij} u^i v^j$ where A is a (0,2) tensor. (For the dot product in Euclidean space A is the identity tensor. But in Minkowski space A is the *Minkowski metric).
Two vectors are orthogonal if $\mathbf{u} \cdot \mathbf{v} = 0$
A bilinear form is symmetric if $B(\mathbf{u}, \mathbf{v}) = B(\mathbf{v}, \mathbf{u})$
Its associated *quadratic form is $Q(\mathbf{v}) = B(\mathbf{v}, \mathbf{v})$
In Euclidean space $Q(\mathbf{v}) = \mathbf{v} \cdot \mathbf{v} = \|\mathbf{v}\|^2$
A nondegenerate bilinear form is one for which the associated matrix is invertible (its determinant is not zero):
$B(\mathbf{u}, \mathbf{v}) = 0$ for all v implies that u = 0.
The inner product is a generalization of the dot product to complex vector space.
(See *Bra–ket notation.)
The inner product can be generalized to a sesquilinear form (linear in one argument, conjugate-linear in the other).
A complex Hermitian form (also called a symmetric sesquilinear form), is a sesquilinear form h : V × V → C such that $h(u, v) = \overline{h(v, u)}$[10]
A is a *Hermitian operator iff it equals its own conjugate transpose: $A = A^\dagger$. Often written as $A = A^*$
The curl operator, $\nabla \times$, is Hermitian.
A *Hilbert space is an inner product space that is also a Complete metric space.
The inner product of 2 functions $f$ and $g$ between $a$ and $b$ is $\langle f, g \rangle = \int_a^b f(x)\,g(x)\,dx$
If this is equal to 0, the functions are said to be orthogonal on the interval. Unlike with vectors, this has no geometric significance but this definition is useful in *Fourier analysis. See below.
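A numerical Python sketch of this function inner product (the quadrature scheme and example functions are my own choices; sin and cos are orthogonal on [−π, π]):

```python
import math

def inner_product(f, g, a: float, b: float, n: int = 10_000) -> float:
    """Approximate <f, g> = integral of f(x) g(x) dx on [a, b] by a midpoint sum."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

# sin and cos are orthogonal on [-pi, pi]; sin is not orthogonal to itself
print(round(inner_product(math.sin, math.cos, -math.pi, math.pi), 6))  # ~0.0
print(round(inner_product(math.sin, math.sin, -math.pi, math.pi), 6))  # ~pi
```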

Outer product

Outer product (a tensor called a dyadic): $\mathbf{u} \otimes \mathbf{v} = \mathbf{u}\mathbf{v}^{\mathsf{T}}$, with components $(\mathbf{u} \otimes \mathbf{v})_{ij} = u_i v_j$

As one would expect, every component of one vector multiplies with every component of the other vector.
For complex vectors, it is customary to use the conjugate transpose of v (denoted $v^{\mathrm{H}}$ or $v^*$):[11] $\mathbf{u} \otimes \mathbf{v} = \mathbf{u}\mathbf{v}^{\mathrm{H}}$
Taking the product of $\mathbf{u} \otimes \mathbf{v}$ and any vector x (See Visualization of Tensor multiplication) causes the components of x not pointing in the direction of v to become zero. What remains is then rotated from v to u. Therefore an outer product rotates one component of a vector and causes all other components to become zero (see the sketch after this list).
To rotate a vector with 2 components you need the sum of at least 2 outer products (a bivector). But this is still not perfect. Any 3rd component not in the plane of rotation will become zero.
A true 3 dimensional rotation matrix can be constructed by summing three outer products. The first two sum to form a bivector. The third one rotates the axis of rotation zero degrees but is necessary to prevent that dimension from being squashed to nothing.
The Tensor product generalizes the outer product.
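A small numpy sketch of the idea in the bullets above: a 2-D rotation assembled from a sum of two outer products (all variable names are mine):

```python
import numpy as np

theta = np.pi / 2    # rotate 90 degrees

e_x, e_y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u1 = np.array([np.cos(theta), np.sin(theta)])    # image of e_x
u2 = np.array([-np.sin(theta), np.cos(theta)])   # image of e_y

# Each outer product handles one component; their sum is the rotation matrix
R = np.outer(u1, e_x) + np.outer(u2, e_y)

print(R @ np.array([1.0, 2.0]))   # [-2.  1.] : (1, 2) rotated by 90 degrees
```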

Geometric product

The geometric product will be explained in detail below.

Wedge product

A unit vector and a unit bivector are shown in red

Wedge product (a simple bivector): $\mathbf{u} \wedge \mathbf{v} = (u_x v_y - u_y v_x)\,e_{xy} + (u_x v_z - u_z v_x)\,e_{xz} + (u_y v_z - u_z v_y)\,e_{yz}$

The wedge product of 2 vectors is equal to the *geometric product minus the inner product as will be explained in detail below.
The wedge product is also called the exterior product (sometimes mistakenly called the outer product).
The term "exterior" comes from the exterior product of two vectors not being a vector.
Just as a vector has length and direction so a bivector has an area and an orientation.
In three dimensions $\mathbf{a} \wedge \mathbf{b}$ is the dual of the cross product $\mathbf{a} \times \mathbf{b}$, which is a pseudovector.
The magnitude of a∧b∧c equals the volume of the parallelepiped.

The triple product a∧b∧c is a trivector which is a 3rd degree tensor.
In 3 dimensions a trivector is a pseudoscalar so in 3 dimensions every trivector can be represented as a scalar times the unit trivector. See Levi-Civita symbol
The dual of the vector $\mathbf{a}$ is the bivector $\bar{\mathbf{a}} = \mathbf{a}\,e_{xyz}$.

Covectors

The Mississippi flows at about 3 km per hour. Km per hour has both direction and magnitude and is a vector.

The Mississippi flows downhill about one foot per km. Feet per km has direction and magnitude but is not a vector. It's a covector.

The difference between a vector and a covector becomes apparent when changing units. If we measured in meters instead of km then 3 km per hour becomes 3000 meters per hour. The numerical value increases. Vectors are therefore contravariant.

But 1 foot per km becomes 0.001 feet per meter. The numerical value decreases. Covectors are therefore covariant.

Tensors are more complicated. They can be part contravariant and part covariant.

A (1,1) Tensor is one part contravariant and one part covariant. It is totally unaffected by a change of units. It is these that we will study in the next section.
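A tiny Python sketch of this scaling behaviour (names and numbers are illustrative):

```python
# Changing units from km to meters: the new unit is 1/1000 of the old one
scale = 1000.0             # meters per km

velocity_km = 3.0          # vector component: 3 km/h
slope_per_km = 1.0         # covector component: 1 foot of drop per km

# Vector components scale WITH the unit refinement (contravariant)
velocity_m = velocity_km * scale         # 3000 m/h  : value increases
# Covector components scale AGAINST it (covariant)
slope_per_m = slope_per_km / scale       # 0.001 ft/m: value decreases

# Their pairing, like a (1,1) tensor's contraction, is unit-independent
print(velocity_km * slope_per_km, velocity_m * slope_per_m)   # both ~3.0
```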

Tensors

See also: *Matrix norm and *Tensor contraction
External links: Review of Linear Algebra and High-Order Tensors
Tensor components explained

Just as a vector is a sum of unit vectors multiplied by constants so a tensor is a sum of unit dyadics ($\mathbf{e}_i \otimes \mathbf{e}_j$) multiplied by constants. Each dyadic is associated with a certain plane segment having a certain orientation and magnitude. (But a dyadic is not the same thing as a bivector.)

A simple tensor is a tensor that can be written as a product of tensors of the form $\mathbf{v}_1 \otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{v}_N$ (See Outer Product above.) The rank of a tensor T is the minimum number of simple tensors that sum to T.[12] A bivector is a tensor of rank 2.

The order or degree of the tensor is the dimension of the tensor which is the total number of indices required to identify each component uniquely.[13] A vector is a 1st-order tensor.

Complex numbers can be used to represent and perform rotations but only in 2 dimensions.

Tensors, on the other hand, can be used in any number of dimensions to represent and perform rotations and other linear transformations. See the image to the right.

Any affine transformation is equivalent to a linear transformation followed by a translation of the origin. (The origin is always a fixed point for any linear transformation.) "Translation" is just a fancy word for "move".

Multiplying a tensor and a vector results in a new vector that can not only have a different magnitude but can even point in a completely different direction: $w_i = \sum_j T_{ij}\,v_j$

Some special cases:

One can also multiply a tensor with another tensor. Each column of the second tensor is transformed exactly as a vector would be.

And we can also switch things around using a *Permutation matrix. (See also *Permutation group):

Matrices do not in general commute: $AB \neq BA$ for most A and B, though particular pairs (for example, two diagonal matrices) do commute.

The Determinant of a matrix is the area or volume of the n-dimensional parallelepiped spanned by its column (or row) vectors and is frequently useful.

Matrices do have zero divisors: $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$

Decomposition of tensors

Every tensor of degree 2 can be decomposed into a symmetric and an anti-symmetric tensor: $T = \tfrac{1}{2}\left(T + T^{\mathsf{T}}\right) + \tfrac{1}{2}\left(T - T^{\mathsf{T}}\right)$

The Outer product (tensor product) of a vector with itself, $\mathbf{v} \otimes \mathbf{v}$, is a symmetric tensor.

The wedge product of 2 vectors is anti-symmetric: $\mathbf{u} \wedge \mathbf{v} = -\,\mathbf{v} \wedge \mathbf{u}$
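A numpy sketch of this degree-2 decomposition:

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [5.0, 3.0]])

sym  = (T + T.T) / 2    # symmetric part
skew = (T - T.T) / 2    # anti-symmetric part

assert np.allclose(T, sym + skew)
assert np.allclose(sym, sym.T)       # symmetric
assert np.allclose(skew, -skew.T)    # anti-symmetric
print(sym, skew, sep="\n")
```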

Any matrix X with complex entries can be expressed as

$X = A + N$

where

  • A is diagonalizable
  • N is nilpotent
  • A commutes with N (i.e. AN = NA)

This is the *Jordan–Chevalley decomposition.


Block matrix

From Wikipedia:Block matrix

A matrix can be partitioned into blocks; for example, a 4×4 matrix can be partitioned into 4 2×2 blocks. The partitioned matrix can then be written as

$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$

The matrix product $C = AB$ can be formed blockwise (provided the column partition of A matches the row partition of B), yielding C as a partitioned matrix whose blocks are calculated by multiplying and summing blocks:

$C_{ij} = \sum_{k} A_{ik} B_{kj}$

Or, using the *Einstein notation that implicitly sums over repeated indices:

$C_{ij} = A_{ik} B_{kj}$
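A numpy sketch of blockwise multiplication (the 2×2 partitioning is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
B = rng.random((4, 4))

def blocks(M):
    """Partition a 4x4 matrix into a 2x2 grid of 2x2 blocks."""
    return [[M[2*i:2*i+2, 2*k:2*k+2] for k in range(2)] for i in range(2)]

Ab, Bb = blocks(A), blocks(B)

# C_ij = sum_k A_ik B_kj, computed block by block
C = np.block([[sum(Ab[i][k] @ Bb[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)])

assert np.allclose(C, A @ B)   # blockwise product equals the usual product
```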

Normal matrices

A diagonal matrix:

$\mathrm{diag}(\lambda_1, \dots, \lambda_n) = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$

The determinant of a diagonal matrix is the product of its diagonal entries:

$\det(\mathrm{diag}(\lambda_1, \dots, \lambda_n)) = \lambda_1 \lambda_2 \cdots \lambda_n$

A superdiagonal entry is one that is directly above and to the right of the main diagonal. A subdiagonal entry is one that is directly below and to the left of the main diagonal. The eigenvalues of diag(λ₁, ..., λₙ) are λ₁, ..., λₙ with associated eigenvectors e₁, ..., eₙ.

A *spectral theorem is a result about when a matrix can be diagonalized. This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations.

A matrix A is normal if it commutes with its conjugate transpose ($AA^* = A^*A$). A matrix is normal if and only if it is *diagonalizable by a unitary matrix.


All unitary, Hermitian, and *skew-Hermitian matrices are normal.

All orthogonal, symmetric, and skew-symmetric matrices are normal.

A Unitary matrix is a complex square matrix whose rows (or columns) form an *orthonormal basis of $\mathbb{C}^n$ with respect to the usual inner product.

An orthogonal matrix is a real unitary matrix. Its columns and rows are orthogonal unit vectors (i.e., *orthonormal vectors). A permutation matrix is an orthogonal matrix.

A Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose. The diagonal elements must be real.

A symmetric matrix is a real Hermitian matrix. It is equal to its transpose.

A *Skew-Hermitian matrix is a complex square matrix whose conjugate transpose is its negative. The diagonal elements must be imaginary.

A Skew-symmetric matrix is a real Skew-Hermitian matrix. Its transpose equals its negative: $A^{\mathsf{T}} = -A$. The diagonal elements must be zero.
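A small numpy sketch of the normality condition $AA^* = A^*A$, checking one Hermitian (hence normal) matrix and one non-normal shear:

```python
import numpy as np

def is_normal(A: np.ndarray) -> bool:
    """A matrix is normal when it commutes with its conjugate transpose."""
    return np.allclose(A @ A.conj().T, A.conj().T @ A)

hermitian = np.array([[2, 1 - 1j],
                      [1 + 1j, 3]])     # equals its own conjugate transpose
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])          # not normal, not unitarily diagonalizable

print(is_normal(hermitian))   # True
print(is_normal(shear))       # False
```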

Change of basis

An n-by-n square matrix A is invertible if there exists an n-by-n square matrix $A^{-1}$ such that $AA^{-1} = A^{-1}A = I_n$

A matrix is invertible if and only if its determinant is non-zero.

The standard basis for $\mathbb{R}^3$ would be: $\mathbf{e}_1 = (1, 0, 0)$, $\mathbf{e}_2 = (0, 1, 0)$, $\mathbf{e}_3 = (0, 0, 1)$

Given a matrix M whose columns are the vectors of the new basis of the space (new basis matrix), the new coordinates for a column vector $\mathbf{v}$ are given by the matrix product $M^{-1}\mathbf{v}$.

From Wikipedia:Matrix similarity:

Given a linear transformation:

$\mathbf{y} = T\mathbf{x}$,

it can be the case that a change of basis can result in a simpler form of the same transformation:

$\mathbf{y}' = T'\mathbf{x}'$,

where x' and y' are in the new basis ($\mathbf{x} = P\mathbf{x}'$ and $\mathbf{y} = P\mathbf{y}'$)
and P is the change-of-basis matrix.

To derive T' in terms of the simpler matrix, we use:

$P\mathbf{y}' = TP\mathbf{x}'$, so $\mathbf{y}' = (P^{-1}TP)\mathbf{x}'$ and $T' = P^{-1}TP$

Thus, the matrix in the original basis is given by

$T = PT'P^{-1}$

Therefore the two matrices represent the same transformation in different bases.

From Wikipedia:Matrix similarity

Two n-by-n matrices A and B are called similar if

$B = P^{-1}AP$

for some invertible n-by-n matrix P.

A transformation $A \mapsto P^{-1}AP$ is called a similarity transformation or conjugation of the matrix A. In the *general linear group, similarity is therefore the same as *conjugacy, and similar matrices are also called conjugate.
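A numpy sketch of a similarity transformation; similar matrices share eigenvalues (the example matrices are arbitrary):

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # invertible change-of-basis matrix

T_prime = np.linalg.inv(P) @ T @ P  # the similar matrix in the new basis

# Similar matrices represent the same transformation: same eigenvalues
print(np.sort(np.linalg.eigvals(T)))         # [2. 3.]
print(np.sort(np.linalg.eigvals(T_prime)))   # [2. 3.] (up to rounding)
```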

Linear groups

A square matrix of order n is an n-by-n matrix. Any two square matrices of the same order can be added and multiplied. A matrix is invertible if and only if its determinant is nonzero.

GLn(F) or GL(n, F), or simply GL(n) is the *Lie group of n×n invertible matrices with entries from the field F. The group operation is matrix multiplication. The group GL(n, F) and its subgroups are often called linear groups or matrix groups.

SL(n, F) or SLn(F), is the *subgroup of GL(n, F) consisting of matrices with a determinant of 1.
U(n), the Unitary group of degree n is the group of n × n unitary matrices. The group operation is matrix multiplication.[14] The determinant of a unitary matrix is a complex number with norm 1.
SU(n), the special unitary group of degree n, is the *Lie group of n×n unitary matrices with determinant 1.

Symmetry groups

*Affine group

*Poincaré group: boosts, rotations, translations
*Lorentz group: boosts, rotations
The set of all boosts, however, does not form a subgroup, since composing two boosts does not, in general, result in another boost. (Rather, a pair of non-colinear boosts is equivalent to a boost and a rotation, and this relates to Thomas rotation.)

Aff(n,K): the affine group or general affine group of any affine space over a field K is the group of all invertible affine transformations from the space into itself.

E(n): rotations, reflections, and translations.
O(n): rotations, reflections
SO(n): rotations
so(3) is the Lie algebra of SO(3) and consists of all skew-symmetric 3 × 3 matrices.

Clifford group: The set of invertible elements x such that $x\,v\,\alpha(x)^{-1} \in V$ for all v in V (where α denotes the grade involution). The *spinor norm Q is defined on the Clifford group by $Q(x) = x^{\mathrm{t}}\,x$

PinV(K): The subgroup of elements of spinor norm 1. Maps 2-to-1 to the orthogonal group
SpinV(K): The subgroup of elements of Dickson invariant 0 in PinV(K). When the characteristic is not 2, these are the elements of determinant 1. Maps 2-to-1 to the special orthogonal group. Elements of the spin group act as linear transformations on the space of spinors

Rotations

In 4 spatial dimensions a rigid object can *rotate in 2 different ways simultaneously.

Stereographic projection of a four-dimensional Tesseract in double rotation

See also: *Hypersphere of rotations, *Rotation group SO(3), *Special unitary group, *Plate trick, *Spin representation, *Spin group, *Pin group, *Spinor, Clifford algebra, *Indefinite orthogonal group, *Root system, Bivectors, Curl


From Wikipedia:Rotation group SO(3):

Consider the solid ball in R3 of radius π. For every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The two rotations through π and through −π are the same. So we *identify (or "glue together") *antipodal points on the surface of the ball.

The ball with antipodal surface points identified is a *smooth manifold, and this manifold is *diffeomorphic to the rotation group. It is also diffeomorphic to the *real 3-dimensional projective space RP3, so the latter can also serve as a topological model for the rotation group.

These identifications illustrate that SO(3) is *connected but not *simply connected. As to the latter, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open". (In other words one full rotation is not equivalent to doing nothing.)

A set of belts can be continuously rotated without becoming twisted or tangled. The cube must go through two full rotations for the system to return to its initial state. See *Tangloids.

Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The *Balinese plate trick and similar tricks demonstrate this practically.

The same argument can be performed in general, and it shows that the *fundamental group of SO(3) is cyclic group of order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects known as *spinors, and is an important tool in the development of the *spin-statistics theorem.

Spin group

The *universal cover of SO(3) is a *Lie group called *Spin(3). The group Spin(3) is isomorphic to the *special unitary group SU(2); it is also diffeomorphic to the unit *3-sphere S3 and can be understood as the group of *versors (quaternions with absolute value 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in *quaternions and spatial rotation. The map from S3 onto SO(3) that identifies antipodal points of S3 is a *surjective *homomorphism of Lie groups, with *kernel {±1}. Topologically, this map is a two-to-one *covering map. (See the *plate trick.)

From Wikipedia:Spin group:

The spin group Spin(n)[15][16] is the *double cover of the *special orthogonal group SO(n) = SO(n, R), such that there exists a *short exact sequence of *Lie groups (with n ≠ 2)

$1 \to \mathbb{Z}_2 \to \mathrm{Spin}(n) \to \mathrm{SO}(n) \to 1$

As a Lie group, Spin(n) therefore shares its *dimension, n(n − 1)/2, and its *Lie algebra with the special orthogonal group.

For n > 2, Spin(n) is *simply connected and so coincides with the *universal cover of *SO(n).

The non-trivial element of the kernel is denoted −1, which should not be confused with the orthogonal transform of *reflection through the origin, generally denoted −I .

Spin(n) can be constructed as a *subgroup of the invertible elements in the Clifford algebra Cl(n). A distinct article discusses the *spin representations.

Matrix representations

See also: *Group representation, *Presentation of a group, *Abstract algebra

Real numbers

If a vector is multiplied with the *identity matrix then the vector is completely unchanged: $I\mathbf{v} = \mathbf{v}$

And if $A = aI$ then $A\mathbf{v} = aI\mathbf{v} = a\mathbf{v}$

Therefore $aI$ can be thought of as the matrix form of the scalar a. The scalar matrices are the center of the algebra of matrices.


(Note: Not all matrices have a logarithm and those matrices that do have a logarithm may have more than one logarithm. The study of logarithms of matrices leads to Lie theory since when a matrix has a logarithm then it is in a Lie group and the logarithm is the corresponding element of the vector space of the Lie algebra.)

Complex numbers

Complex numbers can also be written in matrix form in such a way that complex multiplication corresponds perfectly to matrix multiplication:

$a + bi \leftrightarrow \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$
The absolute value of a complex number is defined by the Euclidean distance of its corresponding point in the complex plane from the origin computed using the Pythagorean theorem.
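A numpy sketch verifying that this representation turns complex multiplication into matrix multiplication:

```python
import numpy as np

def to_matrix(z: complex) -> np.ndarray:
    """Represent a + bi as [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 3j, 1 - 4j
# Matrix multiplication matches complex multiplication
print(to_matrix(z) @ to_matrix(w))   # represents z*w = 14 - 5j
print(to_matrix(z * w))              # the same matrix
```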

Quaternions

There are at least two ways of representing quaternions as matrices in such a way that quaternion addition and multiplication correspond to matrix addition and matrix multiplication.

Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as

$\begin{pmatrix} a + bi & c + di \\ -c + di & a - bi \end{pmatrix}$

Multiplying any two Pauli matrices always yields a quaternion unit matrix. See Isomorphism to quaternions below.

By replacing each complex entry with its 2 × 2 real matrix representation, that same quaternion can be written as a 4 × 4 real (*block) matrix (one of several equivalent forms):

$\begin{pmatrix} a & -b & c & -d \\ b & a & d & c \\ -c & -d & a & b \\ d & -c & -b & a \end{pmatrix}$

However, the representation of quaternions in M(4,ℝ) is not unique. In fact, there exist 48 distinct representations of this form. Each 4x4 matrix representation of quaternions corresponds to a multiplication table of unit quaternions. See Wikipedia:Quaternion#Matrix_representations.

The obvious way of representing quaternions with 3 × 3 real matrices does not work because no real 3 × 3 matrix can square to $-I_3$: taking determinants would require $\det(A)^2 = \det(-I_3) = -1$, which is impossible.

Vectors

Euclidean
See also: *Split-complex numbers

Unfortunately the matrix representation of a vector is not so obvious. First we must decide what properties the matrix should have. To see why, consider the square (*quadratic form) of a single vector:

From the Pythagorean theorem we know that:

$\|\mathbf{v}\|^2 = v_x^2 + v_y^2$

So we know that the basis vectors must square to 1 ($e_x^2 = e_y^2 = 1$) and, for the cross terms to cancel, must anticommute ($e_x e_y = -e_y e_x$).

This particular Clifford algebra is known as Cl2,0. The subscript 2 indicates that the 2 basis vectors are square roots of +1. See *Metric signature. If we had instead used basis vectors that square to −1 then the result would have been Cl0,2.

The set of 3 matrices in 3 dimensions that have these properties are called *Pauli matrices. The algebra generated by the three Pauli matrices is isomorphic to the Clifford algebra of $\mathbb{R}^3$.

From Wikipedia:Pauli matrices

The Pauli matrices are a set of three 2 × 2 complex matrices which are Hermitian and unitary.[17] They are

$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
Squaring a Pauli matrix results in a "scalar": $\sigma_1^2 = \sigma_2^2 = \sigma_3^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$

Do NOT confuse this scalar with the vectors above. The identity matrix may look similar to the Pauli matrices but it is not the matrix representation of a vector; it is the matrix representation of a scalar. Scalars are totally different from vectors, and the matrix representations of scalars are totally different from the matrix representations of vectors.

Multiplication is *anticommutative:

$\sigma_i \sigma_j = -\sigma_j \sigma_i \quad (i \neq j)$

And

$\sigma_1 \sigma_2 = i\sigma_3, \quad \sigma_2 \sigma_3 = i\sigma_1, \quad \sigma_3 \sigma_1 = i\sigma_2$

Exponential of a Pauli vector, which is analogous to Euler's formula, extended to quaternions:

$e^{ia(\hat{n} \cdot \vec{\sigma})} = I\cos{a} + i(\hat{n} \cdot \vec{\sigma})\sin{a}$

commutation relations:

$[\sigma_i, \sigma_j] = 2i\,\varepsilon_{ijk}\,\sigma_k$

*anticommutation relations:

$\{\sigma_i, \sigma_j\} = 2\delta_{ij}\,I$

Adding the commutator ($[\sigma_i, \sigma_j]$) to the anticommutator ($\{\sigma_i, \sigma_j\}$) gives the general formula for multiplying any 2 arbitrary "vectors" (or rather their matrix representations):

$(\vec{a} \cdot \vec{\sigma})(\vec{b} \cdot \vec{\sigma}) = (\vec{a} \cdot \vec{b})\,I + i\,(\vec{a} \times \vec{b}) \cdot \vec{\sigma}$

If $i$ is identified with the pseudoscalar $\sigma_1\sigma_2\sigma_3$ then the right hand side becomes $\vec{a} \cdot \vec{b} + \vec{a} \wedge \vec{b}$, which is also the definition for the geometric product of two vectors in geometric algebra (Clifford algebra). The geometric product of two vectors is a multivector.

For any 2 arbitrary vectors $\mathbf{u}$ and $\mathbf{v}$, applying the rules of Clifford algebra (see below) we get:

$\mathbf{u}\mathbf{v} = \mathbf{u} \cdot \mathbf{v} + \mathbf{u} \wedge \mathbf{v}$
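A numpy sketch verifying the product formula above on two arbitrary sample vectors:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sigma = np.array([s1, s2, s3])

def as_matrix(v):
    """Matrix representation of the vector v, i.e. v . sigma."""
    return sum(c * s for c, s in zip(v, sigma))

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

lhs = as_matrix(a) @ as_matrix(b)
rhs = np.dot(a, b) * I2 + 1j * as_matrix(np.cross(a, b))
print(np.allclose(lhs, rhs))   # True: (a.s)(b.s) = (a.b)I + i(a x b).s
```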

Isomorphism to quaternions

Multiplying any 2 distinct Pauli matrices results in (a multiple of) a quaternion unit; for example the identification $\mathbf{i} \leftrightarrow -i\sigma_1$, $\mathbf{j} \leftrightarrow -i\sigma_2$, $\mathbf{k} \leftrightarrow -i\sigma_3$ reproduces the quaternion multiplication table. Hence the geometric interpretation of the quaternion units as bivectors in 3 dimensional (not 4 dimensional) space.

Quaternions form a *division algebra—every non-zero element has an inverse—whereas Pauli matrices do not.


And multiplying a Pauli matrix and a quaternion results in a Pauli matrix.

Further reading: *Generalizations of Pauli matrices, *Gell-Mann matrices and *Pauli equation

Pseudo-Euclidean
See also: *Electron magnetic moment
From Wikipedia:Gamma matrices

Gamma *matrices, $\gamma^0, \gamma^1, \gamma^2, \gamma^3$, also known as the Dirac matrices, are a set of 4 × 4 conventional matrices with specific *anticommutation relations that ensure they *generate a matrix representation of the Clifford algebra C1,3(R). One gamma matrix ($\gamma^0$) squares to 1 times the *identity matrix and three gamma matrices square to -1 times the identity matrix.

The defining property for the gamma matrices to generate a Clifford algebra is the anticommutation relation

$\{\gamma^\mu, \gamma^\nu\} = \gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu = 2\eta^{\mu\nu}\,I_4$

where $\{\cdot,\cdot\}$ is the *anticommutator, $\eta^{\mu\nu}$ is the *Minkowski metric with signature (+ − − −) and $I_4$ is the 4 × 4 identity matrix.

Minkowski metric

From Wikipedia:Minkowski_space#Minkowski_metric

The simplest example of a Lorentzian manifold is *flat spacetime, which can be given as R4 with coordinates $(t, x, y, z)$ and the metric $ds^2 = dt^2 - dx^2 - dy^2 - dz^2$ (in the + − − − signature used above, with c = 1).

Note that these coordinates actually cover all of R4. The flat space metric (or *Minkowski metric) is often denoted by the symbol η and is the metric used in *special relativity.

A standard basis for Minkowski space is a set of four mutually orthogonal vectors { e0, e1, e2, e3 } such that

$\langle e_0, e_0 \rangle = 1, \quad \langle e_1, e_1 \rangle = \langle e_2, e_2 \rangle = \langle e_3, e_3 \rangle = -1$

These conditions can be written compactly in the form

$\langle e_\mu, e_\nu \rangle = \eta_{\mu\nu}$

Relative to a standard basis, the components of a vector v are written $(v^0, v^1, v^2, v^3)$ where the *Einstein summation convention is used to write $v = v^\mu e_\mu$. The component $v^0$ is called the timelike component of v while the other three components are called the spatial components. The spatial components of a 4-vector v may be identified with a 3-vector $\mathbf{v} = (v^1, v^2, v^3)$.

In terms of components, the Minkowski inner product between two vectors v and w is given by

$\eta(v, w) = \eta_{\mu\nu}\,v^\mu w^\nu = v^0 w^0 - v^1 w^1 - v^2 w^2 - v^3 w^3$

and

$\eta(v, w) = v^\mu w_\mu = v_\mu w^\mu$

Here lowering of an index with the metric was used: $v_\mu = \eta_{\mu\nu} v^\nu$.

The Minkowski metric[18] η is the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally a constant pseudo-Riemannian metric in Cartesian coordinates. As such it is a nondegenerate symmetric bilinear form, a type (0,2) tensor. It accepts two arguments u, v.

The definition

$u \cdot v = \eta(u, v)$

yields an inner product-like structure on M, previously and also henceforth, called the Minkowski inner product, similar to the Euclidean inner product, but it describes a different geometry. It is also called the relativistic dot product. If the two arguments are the same,

$v \cdot v = \eta(v, v)$,

the resulting quantity will be called the Minkowski norm squared.

This bilinear form can in turn be written as

$u \cdot v = u^{\mathsf{T}} [\eta]\, v$

where [η] is a 4×4 matrix associated with η. Possibly confusingly, denote [η] with just η as is common practice. The matrix is read off from the explicit bilinear form as

$\eta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$

and the bilinear form

$u \cdot v = \eta(u, v)$,

with which this section started by assuming its existence, is now identified.

When interpreted as the matrices of the action of a set of orthogonal basis vectors for *contravariant vectors in Minkowski space, the column vectors on which the matrices act become a space of *spinors, on which the Clifford algebra of *spacetime acts. This in turn makes it possible to represent infinitesimal *spatial rotations and Lorentz boosts. Spinors facilitate spacetime computations in general, and in particular are fundamental to the *Dirac equation for relativistic spin-½ particles.

In Dirac representation, the four *contravariant gamma matrices are

$\gamma^0 = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix}, \quad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix} \quad (k = 1, 2, 3)$

$\gamma^0$ is the time-like matrix and the other three are space-like matrices.

The matrices are here written using the 2×2 *identity matrix, $I_2$, and the *Pauli matrices $\sigma^k$.

The gamma matrices we have written so far are appropriate for acting on *Dirac spinors written in the Dirac basis; in fact, the Dirac basis is defined by these matrices. To summarize, in the Dirac basis:

Another common choice is the Weyl or chiral basis,[19] in which $\gamma^k$ remains the same but $\gamma^0$ is different, and so $\gamma^5$ is also different, and diagonal:

$\gamma^0 = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix}, \quad \gamma^5 = \begin{pmatrix} -I_2 & 0 \\ 0 & I_2 \end{pmatrix}$


Original Dirac matrices
Source: Weisstein, Eric W. "Dirac Matrices." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/DiracMatrices.html

Surprisingly, the 16 original Dirac matrices, arranged in a 4 by 4 table, form a multiplication table even though the table is actually created by taking *Kronecker products (not tensor products) of the original 2×2 Pauli matrices and the 2×2 identity matrix.

The Dirac matrices are commonly referred to by Pauli-like symbols; note that these symbols do not refer to the original Pauli matrices.

The 16 original Dirac matrices form six anticommuting sets of five matrices each (Arfken 1985, p. 214).

Any of the 15 original Dirac matrices (excluding the identity matrix) anticommute with eight other original Dirac matrices and commute with the remaining eight, including itself and the identity matrix.

Any of the 16 original Dirac matrices multiplied times itself equals the identity matrix.

Higher-dimensional gamma matrices

*Analogous sets of gamma matrices can be defined in any dimension and for any signature of the metric. For example, the Pauli matrices are a set of "gamma" matrices in dimension 3 with metric of Euclidean signature (3,0). In 5 spacetime dimensions, the 4 gammas above together with the fifth gamma matrix to be presented below generate the Clifford algebra.

It is useful to define the product of the four gamma matrices as follows:

$\gamma^5 := i\gamma^0\gamma^1\gamma^2\gamma^3 = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix}$ (in the Dirac basis).

Although $\gamma^5$ uses the letter gamma, it is not one of the gamma matrices of C1,3(R). The number 5 is a relic of old notation in which

$\gamma^0$

was called "$\gamma^4$".

From Wikipedia:Higher-dimensional gamma matrices

Consider a space-time of dimension d with the flat *Minkowski metric,

$\eta_{ab} = \mathrm{diag}(+1, -1, \dots, -1)$

where a,b = 0,1, ..., d−1. Set N = 2^⌊d/2⌋. The standard Dirac matrices correspond to taking d = N = 4.

The higher gamma matrices are a d-long sequence of complex N×N matrices $\Gamma_a$ which satisfy the *anticommutator relation from the *Clifford algebra Cℓ1,d−1(R) (generating a representation for it),

$\{\Gamma_a, \Gamma_b\} = \Gamma_a\Gamma_b + \Gamma_b\Gamma_a = 2\,\eta_{ab}\,I_N$

where $I_N$ is the *identity matrix in N dimensions. (The spinors acted on by these matrices have N components in d dimensions.) Such a sequence exists for all values of d and can be constructed explicitly, as provided below.

The gamma matrices have the following property under hermitian conjugation: $\Gamma_0^\dagger = +\Gamma_0$ and $\Gamma_i^\dagger = -\Gamma_i$ for $i = 1, \dots, d-1$.
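As a concrete illustration (the construction choice is mine), here is a numpy sketch that builds the d = 4 Dirac-basis gammas from Kronecker products of Pauli matrices and verifies the anticommutator relation:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac basis via Kronecker products:
# gamma^0 = s3 (x) I2, gamma^k = (i s2) (x) sigma_k
gamma = [np.kron(s3, I2)] + [np.kron(1j * s2, s) for s in (s1, s2, s3)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))
print("Clifford relations verified")
```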


Further reading: Quan­tum Me­chan­ics for En­gi­neers and How (not) to teach Lorentz covariance of the Dirac equation

Multivectors

See also: *Dirac algebra

External links:

A brief introduction to geometric algebra
A brief introduction to Clifford algebra
The Construction of Spinors in Geometric Algebra
Functions of Multivector Variables
Clifford Algebra Representations

Clifford algebra is a type of algebra characterized by the geometric product of scalars, vectors, bivectors, trivectors...etc.

Just as a vector has length so a bivector has area and a trivector has volume.

Just as a vector has direction so a bivector has orientation. In three dimensions a trivector has only one possible orientation and is therefore a pseudoscalar. But in four dimensions a trivector becomes a pseudovector and the quadvector becomes the pseudoscalar.

Multiplication of arbitrary vectors

The dot product of two vectors is:

$\begin{split} \mathbf{u} \cdot \mathbf{v} &= \text{vector} \cdot \text{vector} \\ &= (u_{x} + u_{y})(v_{x} + v_{y}) \\ &= u_{x} v_{x} + u_{y} v_{y} \end{split}$

But this is actually quite mysterious. When we multiply $(u_{x} + u_{y})(v_{x} + v_{y})$ we don't get $u_{x} v_{x} + u_{y} v_{y}$, so why is it that when we multiply vectors we only multiply parallel components? Clifford algebra has a surprisingly simple answer. The answer is: We don't! Instead of the dot product or the wedge product Clifford algebra uses the geometric product.

$\begin{split} \mathbf{u} \mathbf{v} &= (u_{x} e_{x} + u_{y} e_{y}) (v_{x} e_{x} + v_{y} e_{y}) \\ &= u_{x} v_{x} e_{x} e_{x} + u_{x} v_{y} e_{x} e_{y} + u_{y} v_{x} e_{y} e_{x} + u_{y} v_{y} e_{y} e_{y} \\ &= u_{x} v_{x} (1) + u_{y} v_{y} (1) + u_{x} v_{y} e_{x} e_{y} - u_{y} v_{x} e_{x} e_{y} \\ &= (u_{x} v_{x} + u_{y} v_{y})(1) + (u_{x} v_{y} - u_{y} v_{x}) e_{xy} \\ &= \text{scalar} + \text{bivector} \end{split}$

A scalar plus a bivector (or any number of blades of different grade) is called a multivector. The idea of adding a scalar and a bivector might seem wrong but in the real world it just means that what appears to be a single equation is in fact a set of *simultaneous equations.

For example:

$\mathbf{u}\mathbf{v} = 5$

would just mean that:

$(u_{x} v_{x} + u_{y} v_{y})(1) = 5 \quad \text{and} \quad (u_{x} v_{y} - u_{y} v_{x})\,e_{xy} = 0$

Rules

All the properties of Clifford algebra derive from a few simple rules.

Let $e_x$, $e_y$, and $e_z$ be perpendicular unit vectors.

Multiplying two perpendicular vectors results in a bivector:

$e_x e_y = e_{xy}$

Multiplying three perpendicular vectors results in a trivector:

$e_x e_y e_z = e_{xyz}$

Multiplying parallel vectors results in a scalar:

$e_x e_x = 1$

Clifford algebra is associative therefore the fact that multiplying parallel vectors results in a scalar means that:

$\begin{split} (e_{x} e_{y}) (e_{y}) &= e_{x} (e_{y} e_{y}) \\ &= e_{x} (1) \\ &= e_{x} \end{split}$

and similarly for the other basis elements.

Rotation from x to y is the negative of rotation from y to x:

$e_x e_y = -e_y e_x$

Therefore:

$\begin{split} (e_{x} e_{y}) (e_{x}) &= \phantom{-} e_{x} (e_{y} e_{x}) \\ &= -e_{x} (e_{x} e_{y}) \\ &= -(e_{x} e_{x}) e_{y} \\ &= -(1) e_{y} \\ &= -e_{y} \end{split}$

Multiplication tables

In one dimension the basis elements are 1 and $e_x$, with $e_x e_x = 1$.

In two dimensions the basis elements are 1, $e_x$, $e_y$, and $e_{xy}$, and the table is:

        1      e_x    e_y    e_xy
 1      1      e_x    e_y    e_xy
 e_x    e_x    1      e_xy   e_y
 e_y    e_y    -e_xy  1      -e_x
 e_xy   e_xy   -e_y   e_x    -1

In three dimensions there are 8 basis elements (1, $e_x$, $e_y$, $e_z$, $e_{xy}$, $e_{xz}$, $e_{yz}$, $e_{xyz}$), and in four dimensions 16; their tables are built by the same rules.
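A small Python sketch that generates such tables mechanically from the two rules above ($e_i e_i = 1$ and $e_i e_j = -e_j e_i$); the implementation details are my own:

```python
def simplify(factors: str) -> tuple[int, str]:
    """Reduce a product of basis vectors, e.g. 'yxy' -> (-1, 'x').
    Uses e_i e_i = 1 and e_i e_j = -e_j e_i (sorting letters, counting swaps)."""
    sign, letters = 1, list(factors)
    changed = True
    while changed:
        changed = False
        for i in range(len(letters) - 1):
            if letters[i] == letters[i + 1]:          # e_i e_i = 1
                del letters[i:i + 2]
                changed = True
                break
            if letters[i] > letters[i + 1]:           # anticommute to sort
                letters[i], letters[i + 1] = letters[i + 1], letters[i]
                sign = -sign
                changed = True
                break
    return sign, "".join(letters)

basis = ["", "x", "y", "xy"]    # 1, e_x, e_y, e_xy : the Cl_2 basis
for a in basis:
    row = []
    for b in basis:
        sign, blade = simplify(a + b)
        row.append(("-" if sign < 0 else "") + ("e_" + blade if blade else "1"))
    print(row)   # reproduces the two-dimensional table above
```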

Basis

Every multivector of the Clifford algebra can be expressed as a linear combination of the canonical basis elements. The basis elements of the Clifford algebra Cℓ3 are $\{1,\ e_x,\ e_y,\ e_z,\ e_{xy},\ e_{xz},\ e_{yz},\ e_{xyz}\}$ and the general element of Cℓ3 is given by

$\mathbf{A} = a_0 + a_1 e_x + a_2 e_y + a_3 e_z + a_4 e_{xy} + a_5 e_{xz} + a_6 e_{yz} + a_7 e_{xyz}$

If $a_0, a_1, \dots, a_7$ are all real then the Clifford algebra is Cℓ3(R). If the coefficients are allowed to be complex then the Clifford algebra is Cℓ3(C).

A multivector can be separated into components of different grades:

$\langle \mathbf{A} \rangle_0 = a_0 (1)$
$\langle \mathbf{A} \rangle_1 = a_1 e_x + a_2 e_y + a_3 e_z$
$\langle \mathbf{A} \rangle_2 = a_4 e_{xy} + a_5 e_{xz} + a_6 e_{yz}$
$\langle \mathbf{A} \rangle_3 = a_7 e_{xyz}$

The elements of even grade form a subalgebra because the sum or product of even grade elements always results in an element of even grade. The elements of odd grade do not form a subalgebra.

Relation to other algebras

Cℓ0(R): Real numbers (scalars). A scalar can (and should) be thought of as zero vectors multiplied together. See Empty product.

Cℓ0(C): Complex numbers

Cℓ1(R): Split-complex numbers

Cℓ1(C): Bicomplex numbers

Cℓ2⁰(R): Complex numbers (The superscript 0 indicates the even subalgebra)

Cℓ3⁰(R): Quaternions

Cℓ3⁰(C): Biquaternions

Multivector multiplication using tensors

To find the product

$AB = (a_0 + a_1 e_x + a_2 e_y + a_3 e_{xy})(b_0 + b_1 e_x + b_2 e_y + b_3 e_{xy})$

we have to multiply every component of the first multivector with every component of the second multivector.

$\begin{split} AB = & \phantom{+} (a_0 b_0 (1)(1) + a_0 b_1 (1) e_{x} + a_0 b_2 (1) e_{y} + a_0 b_3 (1) e_{xy}) \\ &+ (a_1 b_0 e_{x}(1) + a_1 b_1 e_{x} e_{x} + a_1 b_2 e_{x} e_{y} + a_1 b_3 e_{x} e_{xy}) \\ &+ (a_2 b_0 e_{y}(1) + a_2 b_1 e_{y} e_{x} + a_2 b_2 e_{y} e_{y} + a_2 b_3 e_{y} e_{xy}) \\ &+ (a_3 b_0 e_{xy}(1) + a_3 b_1 e_{xy} e_{x} + a_3 b_2 e_{xy} e_{y} + a_3 b_3 e_{xy} e_{xy}) \end{split}$

Then we reduce each of the 16 resulting terms to its standard form.

$\begin{split} AB = & \phantom{+} (a_0 b_0 (1) + a_0 b_1 e_{x} + a_0 b_2 e_{y} + a_0 b_3 e_{xy}) \\ &+ (a_1 b_0 e_{x} + a_1 b_1 (1) + a_1 b_2 e_{xy} + a_1 b_3 e_{y}) \\ &+ (a_2 b_0 e_{y} - a_2 b_1 e_{xy} + a_2 b_2 (1) - a_2 b_3 e_{x}) \\ &+ (a_3 b_0 e_{xy} - a_3 b_1 e_{y} + a_3 b_2 e_{x} - a_3 b_3 (1)) \end{split}$

Finally we collect like products into the four components of the final multivector.

$\begin{split} AB = & \phantom{+} ( a_0 b_0 + a_1 b_1 + a_2 b_2 - a_3 b_3 ) (1) \\ &+ ( a_1 b_0 + a_0 b_1 + a_3 b_2 - a_2 b_3 ) e_{x} \\ &+ ( a_2 b_0 - a_3 b_1 + a_0 b_2 + a_1 b_3 ) e_{y} \\ &+ ( a_3 b_0 - a_2 b_1 + a_1 b_2 + a_0 b_3 ) e_{xy} \end{split}$

This is all very tedious and error-prone. It would be nice if there was some way to cut straight to the end. Tensor notation allows us to do just that.

To find the tensor that we need we first need to know which terms end up as scalars, which terms end up as vectors...etc. There is an easy way to do this and it involves the multiplication table.

First let's start with an easy one.

Complex numbers

The multiplication table for Cℓ2⁰(R) (which is isomorphic to the complex numbers) has basis elements 1 and $e_{xy}$, with $e_{xy} e_{xy} = -1$:

        1      e_xy
 1      1      e_xy
 e_xy   e_xy   -1

We can see then that:

It worked! All the terms in the first row are scalars and all the terms in the second row are bivectors. This is exactly what we are looking for.

Pay special attention to the signs in the final matrix above.

Therefore to find the product

We would multiply:

Each row of the final matrix has exactly the right terms with exactly the right signs.

The vector above represents a complex number. You should think of the first column of the matrix above as representing another complex number. All the other terms in the matrix are just there to make our lives a little bit easier.

It works. It works so well that complex numbers can be represented as matrices as:

$a + bi \leftrightarrow \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$

Which corresponds perfectly to a multiplication table for complex numbers:

      1    i
 1    1    i
 i    i   -1

Quaternions

The multiplication table for Cℓ3⁰(R) (which is isomorphic to the quaternions) has basis elements 1, $e_{xy}$, $e_{xz}$, and $e_{yz}$.

The entire 2nd row of the multiplication table is just the 2nd basis element multiplied by the entire first row.
The entire 3rd row of the multiplication table is just the 3rd basis element multiplied by the entire first row.
The entire 4th row of the multiplication table is just the 4th basis element multiplied by the entire first row.

We can see then that if we multiply each row by the first row again, then we get a table in which every entry of the rth row is, up to sign, just the rth basis element.

This works because we have, in effect, multiplied each term by a second term twice. In other words, we have multiplied every term by the square of another term, and the square of every basis element is either 1 or −1.

Therefore, to find the product of two quaternions, we would multiply the corresponding 4 × 4 matrix by the corresponding column vector, just as in the complex case above.

Just as complex numbers can be represented as 2 × 2 matrices, so a quaternion can be represented as a 4 × 4 real matrix, which corresponds to a multiplication table for quaternions:

1   −k   −j   −i
k    1    i   −j
j   −i    1    k
i    j   −k    1
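
Whatever matrix convention one prefers, the underlying operation is the Hamilton product. A minimal Python sketch (the function name hamilton and the (w, x, y, z) component layout are our own choices):

```python
def hamilton(q, r):
    """Hamilton product of quaternions given as (w, x, y, z) tuples,
    i.e. q = w + x*i + y*j + z*k, using i*i = j*j = k*k = i*j*k = -1."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# i * j = k
print(hamilton((0, 1, 0, 0), (0, 0, 1, 0)))  # (0, 0, 0, 1)
```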

Back to top

CL2[]

The multiplication table for $C\ell_{2}(\mathbf{R})$ is:

1      e_x     e_y     e_xy
e_x    1       e_xy    e_y
e_y    −e_xy   1       −e_x
e_xy   −e_y    e_x     −1

We can see then that:

$$\begin{pmatrix} a_0 b_0 & a_1 b_1 & a_2 b_2 & -a_3 b_3 \\ a_1 b_0 & a_0 b_1 & a_3 b_2 & -a_2 b_3 \\ a_2 b_0 & -a_3 b_1 & a_0 b_2 & a_1 b_3 \\ a_3 b_0 & -a_2 b_1 & a_1 b_2 & a_0 b_3 \end{pmatrix} \begin{matrix} \text{(scalar terms)} \\ (e_x \text{ terms}) \\ (e_y \text{ terms}) \\ (e_{xy} \text{ terms}) \end{matrix}$$

Therefore, to find the product AB, we would multiply:

$$\begin{pmatrix} a_0 & a_1 & a_2 & -a_3 \\ a_1 & a_0 & a_3 & -a_2 \\ a_2 & -a_3 & a_0 & a_1 \\ a_3 & -a_2 & a_1 & a_0 \end{pmatrix} \begin{pmatrix} b_0 \\ b_1 \\ b_2 \\ b_3 \end{pmatrix}$$

as shown in the sketch below.
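
The same product, written as the matrix-vector multiplication above, in a short numpy sketch (the helper name left_matrix is ours):

```python
import numpy as np

def left_matrix(a):
    """Matrix of left-multiplication by the Cl(2,0) multivector
    a = (a0, a1, a2, a3) in the basis (1, e_x, e_y, e_xy)."""
    a0, a1, a2, a3 = a
    return np.array([[a0,  a1,  a2, -a3],
                     [a1,  a0,  a3, -a2],
                     [a2, -a3,  a0,  a1],
                     [a3, -a2,  a1,  a0]])

A = (1.0, 2.0, 0.0, 0.5)
B = np.array([0.0, 1.0, 1.0, 0.0])     # the multivector e_x + e_y
print(left_matrix(A) @ B)              # components of the product AB
```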

Back to top

Squares of pseudoscalars are either 1 or -1[]

In 0 dimensions the pseudoscalar is the scalar 1, and $1^2 = 1$.

In 1 dimension: $(e_1)^2 = 1$

In 2 dimensions: $(e_1 e_2)^2 = e_1 e_2 e_1 e_2 = -e_1 e_1 e_2 e_2 = -1$

In 3 dimensions: $(e_1 e_2 e_3)^2 = -1$

In 4 dimensions: $(e_1 e_2 e_3 e_4)^2 = +1$

In 5 dimensions: $(e_1 \cdots e_5)^2 = +1$

In 6 dimensions: $(e_1 \cdots e_6)^2 = -1$

In 7 dimensions: $(e_1 \cdots e_7)^2 = -1$

In 8 dimensions: $(e_1 \cdots e_8)^2 = +1$

In 9 dimensions: $(e_1 \cdots e_9)^2 = +1$

In general, for the Euclidean algebra Cℓn,0(R) the pseudoscalar $I = e_1 e_2 \cdots e_n$ satisfies $I^2 = (-1)^{n(n-1)/2}$, so the sign repeats with period four: +, +, −, −. A sketch follows below.
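
A one-line check of this pattern (the function name pseudoscalar_square is our own): reversing the order of n anticommuting basis vectors costs n(n−1)/2 swaps, each contributing a factor of −1.

```python
def pseudoscalar_square(n):
    """Sign of I^2 for I = e_1 e_2 ... e_n in the Euclidean algebra Cl(n,0):
    reversing n vectors takes n*(n-1)/2 swaps, each a factor of -1."""
    return (-1) ** (n * (n - 1) // 2)

for n in range(10):
    print(n, pseudoscalar_square(n))   # +1 +1 -1 -1 +1 +1 -1 -1 +1 +1
```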

Back to top

Bivectors in higher dimensions[]

A simple bivector can be used to represent a single rotation.

In four dimensions a rigid object can rotate in two different ways simultaneously. Such a rotation can only be represented as the sum of two simple bivectors.

In six dimensions a rigid object can rotate in three different ways simultaneously. Such a rotation can only be represented as the sum of three simple bivectors.

From Wikipedia:Bivector

The wedge product of two vectors is a bivector, but not all bivectors are wedge products of two vectors. For example, in four dimensions the bivector

$$B = e_1 \wedge e_2 + e_3 \wedge e_4$$

cannot be written as the wedge product of two vectors. A bivector that can be written as the wedge product of two vectors is simple. In two and three dimensions all bivectors are simple, but not in four or more dimensions.

A bivector has a real square if and only if it is simple.

But:

$$(e_1 \wedge e_2 + e_3 \wedge e_4)^2 = e_{12}e_{12} + e_{12}e_{34} + e_{34}e_{12} + e_{34}e_{34} = -2 + 2\, e_{1234}$$

which is not a scalar, so this bivector is not simple.
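
This is easy to verify mechanically. Below is a small Python sketch of a Euclidean basis-blade product (all names are our own; blades are represented as sorted index tuples), confirming that the square of e₁₂ + e₃₄ is not a scalar:

```python
def blade_product(A, B):
    """Product of two Euclidean basis blades given as sorted index tuples.
    Returns (sign, blade), using e_i e_i = +1 and e_i e_j = -e_j e_i."""
    out = list(A)
    sign = 1
    for b in B:
        # Move b leftward past larger indices, flipping sign per swap.
        pos = len(out)
        while pos > 0 and out[pos - 1] > b:
            pos -= 1
            sign = -sign
        if pos > 0 and out[pos - 1] == b:   # e_b e_b = +1: cancel the pair
            out.pop(pos - 1)
        else:
            out.insert(pos, b)
    return sign, tuple(out)

def mv_mul(X, Y):
    """Geometric product of multivectors given as {blade_tuple: coeff} dicts."""
    Z = {}
    for a, xa in X.items():
        for b, yb in Y.items():
            s, c = blade_product(a, b)
            Z[c] = Z.get(c, 0) + s * xa * yb
    return {k: v for k, v in Z.items() if v != 0}

B = {(1, 2): 1, (3, 4): 1}   # e_12 + e_34, a non-simple bivector
print(mv_mul(B, B))          # {(): -2, (1, 2, 3, 4): 2}  -- not a pure scalar
```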

Back to top

Other quadratic forms[]

The square of a vector is:

$$\begin{aligned}
\mathbf{v}\mathbf{v} &= (v_{x} e_{x} + v_{y} e_{y})(v_{x} e_{x} + v_{y} e_{y}) \\
&= v_{x} v_{x}\, e_{x} e_{x} + v_{x} v_{y}\, e_{x} e_{y} + v_{y} v_{x}\, e_{y} e_{x} + v_{y} v_{y}\, e_{y} e_{y} \\
&= v_{x} v_{x} (1) + v_{y} v_{y} (1) + v_{x} v_{y}\, e_{x} e_{y} - v_{y} v_{x}\, e_{x} e_{y} \\
&= (v_{x} v_{x} + v_{y} v_{y})(1) + (v_{x} v_{y} - v_{y} v_{x})\, e_{xy} \\
&= (v_{x}^2 + v_{y}^2)(1) + (0)\, e_{xy} \\
&= (v_{x}^2 + v_{y}^2)(1) \\
&= \text{scalar}
\end{aligned}$$
$Q(\mathbf{v}) = v_x^2 + v_y^2$ is called the quadratic form. In this case both terms are positive, but some Clifford algebras have quadratic forms with negative terms. Some have both positive and negative terms.

From Wikipedia:Clifford algebra:

Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form:

$$Q(v) = v_1^2 + \cdots + v_p^2 - v_{p+1}^2 - \cdots - v_{p+q}^2$$

where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the *signature of the quadratic form. The real vector space with this quadratic form is often denoted Rp,q. The Clifford algebra on Rp,q is denoted Cℓp,q(R). The symbol Cℓn(R) means either Cℓn,0(R) or Cℓ0,n(R) depending on whether the author prefers positive-definite or negative-definite spaces.

A standard basis {ei} for Rp,q consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. The algebra Cℓp,q(R) will therefore have p vectors that square to +1 and q vectors that square to −1.


From Wikipedia:Spacetime algebra:

*Spacetime algebra (STA) is a name for the Clifford algebra Cl3,1(R), or equivalently the geometric algebra G(M4), which can be particularly closely associated with the geometry of special relativity and relativistic spacetime. See also *Algebra of physical space.

The spacetime algebra may be built up from an orthogonal basis of one time-like vector $\gamma_0$ and three space-like vectors $\gamma_1, \gamma_2, \gamma_3$, with the multiplication rule

$$\gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu = 2 \eta_{\mu\nu}$$

where $\eta_{\mu\nu}$ is the Minkowski metric with signature (− + + +).

Thus:

$$\gamma_0^2 = -1, \qquad \gamma_1^2 = \gamma_2^2 = \gamma_3^2 = +1$$

The basis vectors share these properties with the *Gamma matrices, but no explicit matrix representation need be used in STA.
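
As a concrete cross-check of the anticommutation rule, here is a numpy sketch using the standard Dirac representation of the gamma matrices. Note that this representation uses the opposite signature convention, (+ − − −), so η below is diag(1, −1, −1, −1); this is our own illustration, not part of the original text.

```python
import numpy as np

I2 = np.eye(2)
# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def gamma(mu):
    """Dirac-representation gamma matrices, signature (+,-,-,-)."""
    if mu == 0:
        return np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
    s = [sx, sy, sz][mu - 1]
    return np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]])

eta = np.diag([1, -1, -1, -1])
for mu in range(4):
    for nu in range(4):
        anti = gamma(mu) @ gamma(nu) + gamma(nu) @ gamma(mu)
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("gamma_mu gamma_nu + gamma_nu gamma_mu = 2 eta_mu_nu  verified")
```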


$C\ell_{3,0}(\mathbf{R})$: Algebra of physical space (time = scalar)

$C\ell_{3,1}(\mathbf{R})$: Spacetime algebra (time = vector)

$C\ell_{0,2}(\mathbf{R})$: Quaternions (the three imaginary units correspond to two vectors that square to −1 and their bivector, which also squares to −1)

Back to top

Rotors[]

See also: *Rotor (mathematics)
From Wikipedia:Geometric algebra

The inverse of a vector $a$ (with $a^2 \neq 0$) is:

$$a^{-1} = \frac{a}{a^2}$$

The projection of $c$ onto $a$ (or the parallel part) is

$$c_\parallel = (c \cdot a)\, a^{-1}$$

and the rejection of $c$ from $a$ (or the orthogonal part) is

$$c_\perp = (c \wedge a)\, a^{-1}$$

The reflection $c'$ of a vector $c$ along a vector $a$, or equivalently across the hyperplane orthogonal to $a$, is the same as negating the component of the vector parallel to $a$. The result of the reflection will be

$$c' = c_\perp - c_\parallel = -a\, c\, a^{-1}$$

If $a$ is a unit vector then $a^2 = 1$ and therefore

$$c' = -a\, c\, a$$

A product of the form $a\, c\, a$ is called a sandwich product, or double-sided product.

If we have a product of vectors $R = a_1 a_2 \cdots a_r$ then we denote the reverse as

$$R^\dagger = a_r \cdots a_2 a_1$$

Any rotation is equivalent to 2 reflections:

$$c'' = -b\,(-a\, c\, a^{-1})\, b^{-1} = (ba)\, c\, (ba)^{-1} = R\, c\, R^{-1}$$

$R = ba$ is called a Rotor.

If $a$ and $b$ are unit vectors then the rotor is automatically normalised:

$$R R^\dagger = R^\dagger R = 1$$

2 rotations become:

$$R_2 R_1\, c\, R_1^\dagger R_2^\dagger = (R_2 R_1)\, c\, (R_2 R_1)^\dagger$$

$R_2 R_1$ represents rotor $R_1$ rotated by rotor $R_2$. This is a single-sided transformation ($R_2 R_1 R_2^\dagger$ would be double-sided). Therefore rotors do not transform double-sided the way other objects do; they transform single-sided. See the sketch below.
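
A minimal numerical illustration in Cl(2,0), repeating the component product from the earlier sketch so this block runs on its own (all names are our own choices): the rotor R = cos(θ/2) − sin(θ/2) e_xy rotates a vector by θ via the sandwich product.

```python
import math

def cl2_product(a, b):
    # geometric product in Cl(2,0), components ordered (1, e_x, e_y, e_xy)
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
            a1*b0 + a0*b1 + a3*b2 - a2*b3,
            a2*b0 - a3*b1 + a0*b2 + a1*b3,
            a3*b0 - a2*b1 + a1*b2 + a0*b3)

def rotate(v, theta):
    """Rotate the 2D vector v = (x, y) by theta using the rotor
    R = cos(theta/2) - sin(theta/2) e_xy and the sandwich R v R†."""
    R  = (math.cos(theta/2), 0, 0, -math.sin(theta/2))
    Rd = (math.cos(theta/2), 0, 0,  math.sin(theta/2))   # reverse of R
    c = (0, v[0], v[1], 0)
    out = cl2_product(cl2_product(R, c), Rd)
    return out[1], out[2]

print(rotate((1.0, 0.0), math.pi/2))   # ~(0.0, 1.0)
```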

Back to top

Quaternions[]

The square root of the product of a quaternion with its conjugate is called its *norm:

$$\lVert q \rVert = \sqrt{q q^*} = \sqrt{a^2 + b^2 + c^2 + d^2}$$

A unit quaternion is a quaternion of norm one. Unit quaternions, also known as *versors, provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions.


From Wikipedia:Quaternions and spatial rotation

Every nonzero quaternion has a multiplicative inverse:

$$q^{-1} = \frac{q^*}{\lVert q \rVert^2}$$

Thus quaternions form a *division algebra.

The inverse of a unit quaternion is obtained simply by changing the sign of its imaginary components.

A *3-D Euclidean vector such as (2, 3, 4) or (a_x, a_y, a_z) can be rewritten as 0 + 2 i + 3 j + 4 k or 0 + a_x i + a_y j + a_z k, where i, j, k are unit vectors representing the three *Cartesian axes. A rotation through an angle of θ around the axis defined by a unit vector

$$\vec{u} = (u_x, u_y, u_z) = u_x \mathbf{i} + u_y \mathbf{j} + u_z \mathbf{k}$$

can be represented by a quaternion. This can be done using an *extension of Euler's formula:

$$q = e^{\frac{\theta}{2}(u_x \mathbf{i} + u_y \mathbf{j} + u_z \mathbf{k})} = \cos\frac{\theta}{2} + (u_x \mathbf{i} + u_y \mathbf{j} + u_z \mathbf{k}) \sin\frac{\theta}{2}$$

It can be shown that the desired rotation can be applied to an ordinary vector in 3-dimensional space, considered as a quaternion with a real coordinate equal to zero, by evaluating the conjugation of p by q:

$$p' = q\, p\, q^{-1}$$

using the *Hamilton product.

The conjugate of a product of two quaternions is the product of the conjugates in the reverse order: (pq)* = q* p*.

Conjugation by the product of two quaternions is the composition of conjugations by these quaternions: If p and q are unit quaternions, then rotation (conjugation) by pq is

$$(pq)\, \vec{v}\, (pq)^{-1} = p\,(q\, \vec{v}\, q^{-1})\, p^{-1},$$

which is the same as rotating (conjugating) by q and then by p. The scalar component of the result is necessarily zero.

The imaginary part b i + c j + d k of a quaternion behaves like a vector in three-dimensional vector space, and the real part a behaves like a *scalar in R. When quaternions are used in geometry, it is more convenient to define them as *a scalar plus a vector:

$$q = s + \vec{v}$$

When multiplying the vector/imaginary parts, in place of the rules i² = j² = k² = ijk = −1 we have the quaternion multiplication rule:

$$\vec{u}\,\vec{v} = \vec{u} \times \vec{v} - \vec{u} \cdot \vec{v}$$

From these rules it follows immediately that (*see details):

$$(s + \vec{u})(t + \vec{v}) = (s t - \vec{u} \cdot \vec{v}) + (s \vec{v} + t \vec{u} + \vec{u} \times \vec{v})$$

It is important to note, however, that the vector part of a quaternion is, in truth, an "axial" vector or "pseudovector", not an ordinary or "polar" vector.
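
A short Python sketch of the conjugation p ↦ q p q⁻¹ (the helpers hamilton and rotate are our own names, and the Hamilton-product helper is repeated here so the block runs on its own):

```python
import numpy as np

def hamilton(q, r):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(v, axis, theta):
    """Rotate the 3-vector v by theta about the unit vector `axis`
    via the sandwich q p q*  (p is the pure quaternion (0, v))."""
    axis = np.asarray(axis) / np.linalg.norm(axis)
    q  = np.concatenate(([np.cos(theta/2)], np.sin(theta/2) * axis))
    qc = np.concatenate(([q[0]], -q[1:]))   # conjugate = inverse for unit q
    return hamilton(hamilton(q, np.concatenate(([0.0], v))), qc)[1:]

print(rotate([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], np.pi/2))  # ~[0, 1, 0]
```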


From Wikipedia:Quaternion#Quaternions_as_the_even_part_of_Cℓ3,0(R):

the reflection of a vector r in a plane perpendicular to a unit vector w can be written:

$$r' = -w\, r\, w$$

Two reflections make a rotation by an angle twice the angle between the two reflection planes, so

$$r'' = \sigma_2 \sigma_1\, r\, \sigma_1 \sigma_2$$

corresponds to a rotation of 180° in the plane containing σ1 and σ2.

This is very similar to the corresponding quaternion formula,

In fact, the two are identical, if we make the identification

$$i = \sigma_3 \sigma_2, \qquad j = \sigma_1 \sigma_3, \qquad k = \sigma_2 \sigma_1$$

and it is straightforward to confirm that this preserves the Hamilton relations

$$i^2 = j^2 = k^2 = ijk = -1.$$

In this picture, quaternions correspond not to vectors but to bivectors – quantities with magnitude and orientations associated with particular 2D planes rather than 1D directions. The relation to complex numbers becomes clearer, too: in 2D, with two vector directions σ1 and σ2, there is only one bivector basis element σ1σ2, so only one imaginary. But in 3D, with three vector directions, there are three bivector basis elements σ1σ2, σ2σ3, σ3σ1, so three imaginaries.

The usefulness of quaternions for geometrical computations can be generalised to other dimensions, by identifying the quaternions as the even part Cℓ+3,0(R) of the Clifford algebra Cℓ3,0(R).
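
The identification above can be checked directly with numpy, using the Pauli matrices as a representation of the σ's (a sketch under that assumption; variable names are ours):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Identify the quaternion units with the bivectors sigma_a sigma_b
i, j, k = s3 @ s2, s1 @ s3, s2 @ s1
one = np.eye(2)

# Hamilton relations: i^2 = j^2 = k^2 = ijk = -1
for m in (i @ i, j @ j, k @ k, i @ j @ k):
    assert np.allclose(m, -one)
print("Hamilton relations verified")
```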

Back to top

Spinors[]

See also: *Bispinor

External link: An introduction to spinors

Spinors may be regarded as non-normalised rotors which transform single-sided.[20]

Note: The (real) *spinors in three-dimensions are quaternions, and the action of an even-graded element on a spinor is given by ordinary quaternionic multiplication.[21]

A spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360°. This property characterizes spinors.[22]


From Wikipedia:Orientation entanglement

In three dimensions...the *Lie group *SO(3) is not *simply connected. Mathematically, one can tackle this problem by exhibiting the *special unitary group SU(2), which is also the *spin group in three *Euclidean dimensions, as a *double cover of SO(3).

SU(2) is the following group:

$$SU(2) = \left\{ \begin{pmatrix} \alpha & -\overline{\beta} \\ \beta & \overline{\alpha} \end{pmatrix} :\ \alpha, \beta \in \mathbf{C},\ |\alpha|^2 + |\beta|^2 = 1 \right\}$$

where the overline denotes *complex conjugation.

For comparison: Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as

$$\begin{pmatrix} a + b\,i & c + d\,i \\ -c + d\,i & a - b\,i \end{pmatrix}$$

If X = (x1, x2, x3) is a vector in R3, then we identify X with the 2 × 2 matrix with complex entries

$$X = \begin{pmatrix} x_3 & x_1 - i x_2 \\ x_1 + i x_2 & -x_3 \end{pmatrix}$$

Note that −det(X) gives the square of the Euclidean length of X regarded as a vector, and that X is a *trace-free, or better, trace-zero *Hermitian matrix.

The unitary group acts on X via

$$X \mapsto M X M^{\dagger}$$

where M ∈ SU(2). Note that, since M is unitary, det(MXM†) = det(X), and MXM† is trace-zero Hermitian.

Hence SU(2) acts via rotation on the vectors X. Conversely, since any *change of basis which sends trace-zero Hermitian matrices to trace-zero Hermitian matrices must be unitary, it follows that every rotation also lifts to SU(2). However, each rotation is obtained from a pair of elements M and −M of SU(2). Hence SU(2) is a double-cover of SO(3). Furthermore, SU(2) is easily seen to be itself simply connected by realizing it as the group of unit *quaternions, a space *homeomorphic to the *3-sphere.

A unit quaternion has the cosine of half the rotation angle as its scalar part and the sine of half the rotation angle multiplying a unit vector along some rotation axis (here assumed fixed) as its pseudovector (or axial vector) part. If the initial orientation of a rigid body (with unentangled connections to its fixed surroundings) is identified with a unit quaternion having a zero pseudovector part and +1 for the scalar part, then after one complete rotation (2π rad) the pseudovector part returns to zero and the scalar part has become −1 (entangled). After two complete rotations (4π rad) the pseudovector part again returns to zero and the scalar part returns to +1 (unentangled), completing the cycle.
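
This entanglement cycle is easy to see numerically. A tiny sketch (the name rotor is our own), for rotations about the z-axis:

```python
import numpy as np

def rotor(theta):
    """Unit quaternion (w, x, y, z) for a rotation by theta about the z-axis:
    w = cos(theta/2), pseudovector part = sin(theta/2) * (0, 0, 1)."""
    return np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])

print(rotor(0))          # [ 1  0  0  0]   unentangled
print(rotor(2*np.pi))    # [-1  0  0 ~0]   one full turn: scalar part is -1
print(rotor(4*np.pi))    # [ 1  0  0 ~0]   two full turns: back to +1
```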


From Wikipedia:Spinors in three dimensions

The association of a spinor with a 2×2 complex *Hermitian matrix was formulated by Élie Cartan.[23]

In detail, given a vector x = (x1, x2, x3) of real (or complex) numbers, one can associate the complex matrix

$$X = \begin{pmatrix} x_3 & x_1 - i x_2 \\ x_1 + i x_2 & -x_3 \end{pmatrix}$$

Matrices of this form have the following properties, which relate them intrinsically to the geometry of 3-space:

  • det X = −(length x)².
  • X² = (length x)² I, where I is the identity matrix.
  • XY + YX = 2(x · y) I.[23]
  • XY − YX = 2iZ, where Z is the matrix associated to the cross product z = x × y.
  • If u is a unit vector, then −UXU is the matrix associated to the vector obtained from x by reflection in the plane orthogonal to u.
  • It is an elementary fact from *linear algebra that any rotation in 3-space factors as a composition of two reflections. (Similarly, any orientation reversing orthogonal transformation is either a reflection or the product of three reflections.) Thus if R is a rotation, decomposing as the reflection in the plane perpendicular to a unit vector u1 followed by the plane perpendicular to u2, then the matrix U2U1XU1U2 represents the rotation of the vector x through R.

Having effectively encoded all of the rotational linear geometry of 3-space into a set of complex 2×2 matrices, it is natural to ask what role, if any, the 2×1 matrices (i.e., the *column vectors) play. Provisionally, a spinor is a column vector

$$\xi = \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix}$$

with complex entries ξ1 and ξ2.

The space of spinors is evidently acted upon by complex 2×2 matrices. Furthermore, the product of two reflections in a given pair of unit vectors defines a 2×2 matrix whose action on euclidean vectors is a rotation, so there is an action of rotations on spinors.

Often, the first example of spinors that a student of physics encounters are the 2×1 spinors used in Pauli's theory of electron spin. The *Pauli matrices are a vector of three 2×2 *matrices that are used as *spin *operators.

Given a *unit vector in 3 dimensions, for example (a, b, c), one takes a *dot product with the Pauli spin matrices to obtain a spin matrix for spin in the direction of the unit vector.

The *eigenvectors of that spin matrix are the spinors for spin-1/2 oriented in the direction given by the vector.

Example: u = (0.8, −0.6, 0) is a unit vector. Dotting this with the Pauli spin matrices gives the matrix:

$$S_u = 0.8\,\sigma_1 - 0.6\,\sigma_2 + 0\,\sigma_3 = \begin{pmatrix} 0 & 0.8 + 0.6i \\ 0.8 - 0.6i & 0 \end{pmatrix}$$

The eigenvectors may be found by the usual methods of *linear algebra, but a convenient trick is to note that a Pauli spin matrix is an *involutory matrix, that is, the square of the above matrix is the identity matrix.

Thus a (matrix) solution to the eigenvector problem with eigenvalues of ±1 is simply I ± S_u. That is,

$$S_u (I \pm S_u) = \pm (I \pm S_u)$$

One can then choose either of the columns of the eigenvector matrix as the vector solution, provided that the column chosen is not zero. Taking the first column of the above, the eigenvector solutions for the two eigenvalues are:

$$\begin{pmatrix} 1 \\ 0.8 - 0.6i \end{pmatrix} \ (\text{eigenvalue } +1), \qquad \begin{pmatrix} 1 \\ -0.8 + 0.6i \end{pmatrix} \ (\text{eigenvalue } -1)$$

The trick used to find the eigenvectors is related to the concept of *ideals: the matrix eigenvectors (1 ± Su)/2 are *projection operators or *idempotents, and therefore each generates an ideal in the Pauli algebra. The same trick works in any *Clifford algebra, in particular the *Dirac algebra discussed below. These projection operators are also seen in *density matrix theory, where they are examples of pure density matrices.

More generally, the projection operator for spin in the (a, b, c) direction is given by

$$\frac{1}{2}\begin{pmatrix} 1 + c & a - ib \\ a + ib & 1 - c \end{pmatrix}$$

and any non-zero column can be taken as the projection operator. While the two columns appear different, one can use a² + b² + c² = 1 to show that they are multiples (possibly zero) of the same spinor.
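
The example above can be reproduced with numpy (a sketch; variable names are our own):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

u = np.array([0.8, -0.6, 0.0])            # the unit vector from the example
Su = u[0]*s1 + u[1]*s2 + u[2]*s3          # spin matrix in the direction u

assert np.allclose(Su @ Su, np.eye(2))    # involutory: Su^2 = I

for sign in (+1, -1):
    proj = (np.eye(2) + sign * Su) / 2    # idempotent projection operator
    spinor = proj[:, 0]                   # first column as the spinor
    # spinor is an eigenvector of Su with eigenvalue `sign`
    assert np.allclose(Su @ spinor, sign * spinor)
print("eigen-spinors verified")
```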


From Wikipedia:Tensor#Spinors:

When changing from one *orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not *simply connected (see *orientation entanglement and *plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.[24] A *spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.[25][26]

Succinctly, spinors are elements of the *spin representation of the rotation group, while tensors are elements of its *tensor representations. Other *classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.


From Wikipedia:Spinor:

Quote from Élie Cartan: The Theory of Spinors, Hermann, Paris, 1966: "Spinors...provide a linear representation of the group of rotations in a space with any number n of dimensions, each spinor having 2^ν components where n = 2ν + 1 or n = 2ν." The star (*) refers to Cartan 1913.

(Note: ν is the number of *simultaneous independent rotations an object can have in n dimensions.)

Although spinors can be defined purely as elements of a representation space of the spin group (or its Lie algebra of infinitesimal rotations), they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices, and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, hence what precisely constitutes a "column vector" (or spinor), involves the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as (complex) column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.

In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to (angular momenta about) the three coordinate axes. These are 2×2 matrices with complex entries, and the two-component complex column vectors on which these matrices act by matrix multiplication are the spinors. In this case, the spin group is isomorphic to the group of 2×2 unitary matrices with determinant one, which naturally sits inside the matrix algebra. This group acts by conjugation on the real vector space spanned by the Pauli matrices themselves, realizing it as a group of rotations among them, but it also acts on the column vectors (that is, the spinors).


From Wikipedia:Spinor:

In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles. More precisely, it is the fermions of spin-1/2 that are described by spinors, which is true both in the relativistic and non-relativistic theory. The wavefunction of the non-relativistic electron has values in 2-component spinors transforming under three-dimensional infinitesimal rotations. The relativistic *Dirac equation for the electron is an equation for 4-component spinors transforming under infinitesimal Lorentz transformations, for which a substantially similar theory of spinors exists.

Back to top

Next section: Intermediate mathematics/Functions



See also[]

External links[]

References[]

  1. Wikipedia:Division algebra
  2. Wikipedia:Lie group
  3. Wikipedia:Cartesian product
  4. Wikipedia:Tangent bundle
  5. Wikipedia:Lie group
  6. Wikipedia:Topological space
  7. Wikipedia:Normed vector space
  8. Wikipedia:Norm (mathematics)
  9. Wikipedia:Norm (mathematics)
  10. Wikipedia:Sesquilinear form
  11. Wikipedia:Outer product
  12. Wikipedia:Tensor (intrinsic definition)
  13. Wikipedia:Tensor
  14. Wikipedia:Special unitary group
  15. Lawson, H. Blaine; Michelsohn, Marie-Louise (1989). Spin Geometry. Princeton University Press. ISBN 978-0-691-08542-5  page 14
  16. Friedrich, Thomas (2000), Dirac Operators in Riemannian Geometry, American Mathematical Society, ISBN 978-0-8218-2055-1  page 15
  17. "Pauli matrices". Planetmath website. 28 March 2008. http://planetmath.org/PauliMatrices. Retrieved 28 May 2013. 
  18. The Minkowski inner product is not an *inner product, since it is not *positive-definite, i.e. the *quadratic form η(v, v) need not be positive for nonzero v. The positive-definite condition has been replaced by the weaker condition of non-degeneracy. The bilinear form is said to be indefinite.
  19. The matrices in this basis, provided below, are similarity transforms of the Dirac basis matrices of the previous paragraph.
  20. Wikipedia:Rotor (mathematics)
  21. Wikipedia:Spinor#Three_dimensions
  22. Wikipedia:Spinor
  23. Cartan, Élie (1981) [1938], The Theory of Spinors, New York: Dover Publications, ISBN 978-0-486-64070-9, MR 631850, https://books.google.com/books?isbn=0486640701 
  24. Roger Penrose (2005). The road to reality: a complete guide to the laws of our universe. Knopf. pp. 203–206. 
  25. E. Meinrenken (2013), "The spin representation", Clifford Algebras and Lie Theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics, 58, Springer-Verlag, doi:10.1007/978-3-642-36216-3_3 
  26. S.-H. Dong (2011), "Chapter 2, Special Orthogonal Group SO(N)", Wave Equations in Higher Dimensions, Springer, pp. 13–38 

