It has been known since the time of Euclidw that all of geometry can be derived from a handful of objects (points, lines...), a few actions on those objects, and a small number of axiomsw. Every field of science likewise can be reduced to a small set of objects, actions, and rules. Math itself is not a single field but rather a constellation of related fields. One way in which new fields are created is by the process of generalization.

A generalization is the formulation of general concepts from specific instances by abstracting common properties. Generalization is the process of identifying the parts of a whole as belonging to the whole.[1]


Mathematical notationw can be extremely intimidating. Wikipedia is full of articles with page after page of indecipherable text. At first glance this article might appear to be the same. I want to assure the reader that every effort has been made to simplify everything as much as possible while also providing links to articles with more in-depth information.

The following has been assembled from countless small pieces gathered from throughout the world wide web. I can't guarantee that there are no errors in it. Please report any errors or omissions on this article's talk page.



See: Peano axiomsw and Hyperoperation*

The basis of all of mathematics is the "Next"* function. See Graph theoryw. Next(0)=1, Next(1)=2, Next(2)=3, Next(3)=4. (We might express this by saying that One differs from nothing as two differs from one.) This defines the Natural numbersw (denoted \mathbb{N}_0). Natural numbers are those used for counting. See Tutorial:Counting.

These have the convenient property of being transitivew. That means that if a<b and b<c then it follows that a<c. In fact they are totally orderedw. See Order theory*.

Additionw (See Tutorial:arithmetic) is defined as repeatedly calling the Next function, and its inverse is subtractionw. But this leads to the ability to write equations like 1-3=x for which there is no answer among natural numbers. To provide an answer mathematicians generalize to the set of all integersw (denoted \mathbb{Z}) which includes negative integers.
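
A quick Python sketch of this construction (the names Next and add are illustrative, not a standard API):

```python
def Next(n):            # the successor function: Next(0)=1, Next(1)=2, ...
    return n + 1

def add(a, b):          # addition defined as b repeated calls of Next
    for _ in range(b):
        a = Next(a)
    return a

print(add(2, 3))        # 5
# 1 - 3 = x has no answer among the natural numbers; generalizing to the
# integers (which include negatives) provides one:
print(1 - 3)            # -2
```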

The Additive identityw is zero because x + 0 = x.
0 is an idempotent* element for addition since 0 + 0 = 0
The absolute value or modulus of x is defined as |x| = \begin{cases} x, & \text{if } x \geq 0 \\ -x, & \text{if } x < 0 \end{cases}
Absolute value is an idempotent* function since abs(abs(x)) = abs(x)
Integers form a ring*; they are the ring of integers (denoted \mathcal O_\mathbb{Q}) of the field of rational numbers. Ringw is defined below.
Zn is used to denote the set of integers modulo n *.
Modular arithmetic is essentially arithmetic in the quotient ringw Z/nZ (which has n elements).
An ideal* is a special subset of a ring. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3.
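A small Python illustration of arithmetic in the quotient ring Z/nZ, using the % operator to pick the representative in {0, ..., n-1} (a sketch; n = 5 is chosen arbitrarily):

```python
n = 5                        # work in the quotient ring Z/5Z
a, b = 7, 9                  # representatives of the classes 2 and 4
print((a + b) % n)           # 1, since 2 + 4 = 6 ≡ 1 (mod 5)
print((a * b) % n)           # 3, since 2 * 4 = 8 ≡ 3 (mod 5)
# The even numbers are an ideal of Z: sums of evens are even, and any
# integer times an even number stays even.
print((4 + 10) % 2, (7 * 4) % 2)   # 0 0
```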

The Ulam spiral*. Black pixels = prime numbers*.

The study of integers is called Number theoryw.
a \mid b means a divides b.
a \nmid b means a does not divide b.
p^a \mid\mid n means p^a exactly divides n (i.e. p^a divides n but p^{a+1} does not).
A prime number is a natural number greater than 1 that can only be divided by itself and one.
If a, b, c, and d are primes then the Least common multiplew of abc and c^2d is abc^2d. (See Tutorial:least common multiples)

Multiplicationw (See Tutorial:multiplication) is defined as repeated addition, and its inverse is divisionw. But this leads to equations like 3/2=x for which there is no answer among the integers. The solution is to generalize to the set of rational numbersw (denoted \mathbb{Q}) which include fractions (See Tutorial:fractions). Any number which isn't rational is irrationalw. See also p-adic number*

Rational numbers form a field. A Fieldw is defined below.
Rational numbers form a division algebra*.
The set of all rational numbers minus zero forms a multiplicative Group*.
The Multiplicative identityw is one because x * 1 = x.
Division by zero is undefined and undefinablew. 1/0 exists nowhere on the complex planew. It does, however, exist on the Riemann spherew (often called the extended complex plane) where it is surprisingly well behaved. See also Wheel theory* and L'Hôpital's rulew.
(Addition and multiplication are fast but division is slow even for computers*.)

Exponentiationw (See Tutorial:exponents) is defined as repeated multiplication, and its inverses are rootsw and logarithmsw. But this leads to multiple equations with no solutions:

Equations like \sqrt{2}=x. The solution is to generalize to the set of algebraic numbersw (denoted \mathbb{A}). See also algebraic integer*. To see a proof that the square root of two is irrational see Square root of 2w.
Equations like 2^{\sqrt{2}}=x. The solution (because x is transcendentalw) is to generalize to the set of Real numbersw (denoted \mathbb{R}).
Equations like \sqrt{-1}=x and e^x=-1. The solution is to generalize to the set of complex numbersw (denoted \mathbb{C}) by defining i = \sqrt{-1}. A single complex number z=a+bi consists of a real part a and an imaginary part bi (See Tutorial:complex numbers). Imaginary numbersw (denoted \mathbb{I}) often occur in equations involving change with respect to time. If friction is resistance to motion then imaginary friction would be resistance to change of motion with respect to time. (In other words, imaginary friction would be mass.) In fact, in the equation for the Spacetime intervalw (given below), time itself is an imaginary quantity.
Complex numbers can be used to represent and perform rotationsw but only in 2 dimensions. Hypercomplex numbersw like quaternionsw (denoted \mathbb{H}), octonionsw (denoted \mathbb{O}), and sedenions* (denoted \mathbb{S}) are one way to generalize complex numbers to some (but not all) higher dimensions.
Split-complex numbers* (hyperbolic complex numbers) are similar to complex numbers except that i^2 = +1.
The Complex conjugatew of the complex number z=a+bi is \overline{z}=a-bi. (Not to be confused with the dualw of a vector.)
Complex numbers form a K-algebra* because complex multiplication is Bilinear*.
\sqrt{-100} * \sqrt{-100} = 10i * 10i = -100 \neq \sqrt{-100 * -100}
The complex number a+bi written in matrix notationw is:
\begin{bmatrix} a & b \\ -b & a \end{bmatrix} or, equivalently, \begin{bmatrix} a & -b \\ b & a \end{bmatrix}
(a+ib)(c+id) is
\begin{bmatrix} a & b \\ -b & a \end{bmatrix} \begin{bmatrix} c & d \\ -d & c \end{bmatrix} = \begin{bmatrix} ac-bd & ad+bc \\ -(ad+bc) & ac-bd \end{bmatrix}
The complex numbers are not orderedw. However the absolute valuew or modulus* of a complex number is:
|z| = |a + ib| = \sqrt{a^2+b^2}
|z|^2 = \begin{vmatrix} a & -b \\ b & a \end{vmatrix} = a^2 + b^2.
(See Determinantw)
There are n solutions of \sqrt[n]{z}
0^0 = 1. See Empty productw.
\log_b(x) = \frac{\log_a(x)}{\log_a(b)}

Tetrationw is defined as repeated exponentiation and its inverses are called super-root and super-logarithm.


{}^{b}a \;=\; \underbrace{a^{a^{\cdot^{\cdot^{\cdot^{a}}}}}}_{b\text{ copies of }a} \;=\; a\uparrow\uparrow b \;=\; \underbrace{a\uparrow (a\uparrow(\dots\uparrow a))}_{b\text{ copies of }a}
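A minimal Python sketch of tetration as repeated exponentiation (the function name tetration is illustrative):

```python
def tetration(a, b):
    """Compute b copies of a in a power tower: a^(a^(...^a))."""
    result = 1                 # empty tower: a^^0 = 1
    for _ in range(b):
        result = a ** result   # towers evaluate top-down (right-associative)
    return result

print(tetration(2, 3))         # 16, since 2^(2^2) = 2^4
print(tetration(2, 4))         # 65536, since 2^16
```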


When a quantity, like the charge of a single electron, becomes so small that it is insignificant we, quite justifiably, treat it as though it were zero. A quantity that can be treated as though it were zero, even though it very definitely is not, is called infinitesimal. If q is a finite ( q \cdot 1 ) amount of charge then using Leibniz's notationw dq would be an infinitesimal ( q \cdot 1/\infty ) amount of charge. See Differentialw

Likewise when a quantity becomes so large that a regular finite quantity becomes insignificant then we call it infinite. We would say that the mass of the ocean is infinite ( M \cdot \infty ). But compared to the mass of the Milky Way galaxy our ocean is insignificant. So we would say the mass of the Galaxy is doubly infinite ( M \cdot \infty^2 ).

Infinity and the infinitesimal are called Hyperreal numbersw (denoted {}^*\mathbb{R}). Hyperreals behave, in every way, exactly like real numbers. For example, 2 \cdot \infty is exactly twice as big as \infty. In reality, the mass of the ocean is a real number so it is hardly surprising that it behaves like one. See Epsilon numbers* and Big O notation*



[-2,5[ or [-2,5) denotes the intervalw from -2 to 5, including -2 but excluding 5.
[3..7] denotes all integers from 3 to 7.
The set of all reals is unbounded at both ends.
An open interval does not include its endpoints.
Compactness* is a property that generalizes the notion of a subset being closed and bounded.
The unit interval* is the closed interval [0,1]. It is often denoted I.
The unit square* is a square whose sides have length 1.
Often, "the" unit square refers specifically to the square in the Cartesian planew with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1).
The unit disk* in the complex plane is the set of all complex numbers of absolute value less than one and is often denoted  \mathbb {D}



See also: Algebraic geometry*, Algebraic variety*, Scheme*, Algebraic manifold*, and Linear algebraw

The one dimensional number line can be generalized to a multidimensional Cartesian coordinate systemw thereby creating multidimensional math (i.e. geometryw).

For sets A and B, the Cartesian product A × B is the set of all ordered pairsw (a, b) where a ∈ A and b ∈ B.[2] The direct product* generalizes the Cartesian product. (See also Direct sum*)

\mathbb{R}^3 is the Cartesian productw \mathbb{R} \times \mathbb{R} \times \mathbb{R}.
\mathbb{R}^\infty = \mathbb{R}^\mathbb{N}
\mathbb{C}^3 is the Cartesian productw \mathbb{C} \times \mathbb{C} \times \mathbb{C} (See Complexification*)

A vector spacew is a coordinate spacew with vector additionw and scalar multiplicationw (multiplication of a vector and a scalarw belonging to a fieldw).


i, j, and k are basis vectors
a = a_x i + a_y j + a_z k

If {\mathbf e_1} , {\mathbf e_2} , {\mathbf e_3} are orthogonalw unitw basis vectors*
and {\mathbf u} , {\mathbf v} , {\mathbf x} are arbitrary vectors then we can (and usually do) write:
\mathbf{u} = u_1 \mathbf{e_1} + u_2 \mathbf{e_2} + u_3 \mathbf{e_3} = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}
\mathbf{v} = v_1 \mathbf{e_1} + v_2 \mathbf{e_2} + v_3 \mathbf{e_3} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}
\mathbf{x} = x_1 \mathbf{e_1} + x_2 \mathbf{e_2} + x_3 \mathbf{e_3} = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}
A module* generalizes a vector space by allowing multiplication of a vector and a scalar belonging to a ringw.

Coordinate systems define the length of vectors parallel to one of the axes but leave all other lengths undefined. This concept of "length", which only works for certain vectors, is generalized as the "normw" which works for all vectors. The norm of vector \mathbf{v} is denoted \|\mathbf{v}\|. The double bars are used to avoid confusion with the absolute value of a scalar.

Taxicab metricw (called the L1 norm. See Lp space*. Sometimes called Lebesgue spaces. See also Lebesgue measurew.)
\|\mathbf{v}\| = |v_1| + |v_2| + |v_3|

c² = (a+b)² - 4(ab/2)
c² = a² + b²

In Euclidean spacew the norm (called the L2 norm) doesn't depend on the choice of coordinate system. As a result, rigid objects can rotate in Euclidean space. See the proof of the Pythagorean theoremw to the right. L2 is the only Hilbert space* among Lp spaces.
\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + v_3^2}
In Minkowski spacew (See Pseudo-Euclidean space*) the Spacetime intervalw is
\|s\| = \sqrt{x^2 + y^2 + z^2 + (cti)^2}
In complex space* the most common norm of an n dimensional vector is obtained by treating it as though it were a regular real valued 2n dimensional vector in Euclidean space
\left\| \boldsymbol{z} \right\| = \sqrt{z_1 \bar z_1 + \cdots + z_n \bar z_n}
A Banach space* is a normed vector space* that is also a complete metric spacew (there are no points missing from it).
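
A NumPy sketch comparing the norms above (the vector values are arbitrary examples):

```python
import numpy as np

v = np.array([3.0, 4.0, 12.0])
print(np.sum(np.abs(v)))           # 19.0, the taxicab (L1) norm
print(np.sqrt(np.sum(v**2)))       # 13.0, the Euclidean (L2) norm

z = np.array([1 + 2j, 3 - 1j])     # complex vector, treated like a real 2n-vector
print(np.sqrt(np.sum(z * z.conj()).real))  # sqrt(5 + 10) ≈ 3.873
```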

Tangent bundle of a circle

A manifoldw \mathbf{M} is a type of topological spacew in which each point has a neighbourhoodw that is homeomorphicw to Euclidean spacew. A manifold is locally, but not globally, Euclidean. A Riemannian metric* on a manifold allows distances and angles to be measured.

A Tangent space* \mathbf{T}_p \mathbf{M} is the set of all vectors tangent to \mathbf{M} at point p.
Informally, a tangent bundle* \mathbf{TM} (red cylinder in image to the right) on a differentiable manifold \mathbf{M} (blue circle) is obtained by joining all the tangent spaces* (red lines) together in a smooth and non-overlapping manner.[3] The tangent bundle always has twice as many dimensions as the original manifold.
A vector bundle* is the same thing minus the requirement that it be tangent.
A fiber bundle* is the same thing minus the requirement that the fibers be vector spaces.
The cotangent bundle* (Dual bundle*) of a differentiable manifold is obtained by joining all the cotangent spaces* (pseudovector spaces).
The cotangent bundle always has twice as many dimensions as the original manifold.
Sections of that bundle are known as differential one-formsw.

The circle of center 0 and radius 1 in the complex plane is a Lie group with complex multiplication.

A Lie group* is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps.[4] n×n invertible matrices* (See below) are a Lie group.

A Lie algebra* (See Infinitesimal transformation*) is a local or linearized version of a Lie group.
The Lie derivativew generalizes the Lie bracketw which generalizes the wedge productw which is a generalization of the cross productw which only works in 3 dimensions.


Multiplication of vectors

Multiplication can be generalized to allow for multiplication of vectors in 3 different ways:

Dot productw (a Scalarw): 

\mathbf{u} \cdot \mathbf{v} = \| \mathbf{u} \|\ \| \mathbf{v}\| \cos(\theta) = u_1 v_1 + u_2 v_2 + u_3 v_3

\mathbf{u}\cdot\mathbf{v} = \begin{bmatrix} u_1 \mathbf{e_1} \\ u_2 \mathbf{e_2} \\ u_3 \mathbf{e_3} \end{bmatrix} \cdot \begin{bmatrix} v_1 \mathbf{e_1} & v_2 \mathbf{e_2} & v_3 \mathbf{e_3} \end{bmatrix} = \begin{bmatrix} u_1 v_1 + u_2 v_2 + u_3 v_3 \end{bmatrix}
Strangely, only parallel components multiply.
The dot product can be generalized to the bilinear formw \beta(\mathbf{u,v}) = \mathbf{u}^T A \mathbf{v} = scalar, where A is a (0,2) tensor. (For the dot product, A is the identity tensor.)
Two vectors are orthogonal if \beta(\mathbf{u,v}) = 0.
A bilinear form is symmetric if \beta(\mathbf{u,v}) = \beta(\mathbf{v,u})
Its associated quadratic form* is Q(\mathbf{x}) = \beta(\mathbf{x,x}).
In Euclidean space \|\mathbf{v}\|^2 = \mathbf{v}\cdot\mathbf{v} = Q(\mathbf{v}).
The inner productw is a generalization of the dot product to complex vector space. \langle u,v\rangle=u\cdot \bar{v}=\langle v \mid u\rangle
The 2 vectors are called "bra" and "ket"*.
A Hilbert space* is an inner product spacew that is also a Complete metric spacew.
The inner product can be generalized to a sesquilinear formw.
A complex Hermitian form (also called a symmetric sesquilinear form), is a sesquilinear form h : V × VC such that[5] h(w,z) = \overline{h(z, w)}.
A is a Hermitian operator* iffw \langle v \mid A u\rangle = \langle A v \mid u\rangle. Often written as \langle v \mid A \mid u\rangle.
The curl operator, \nabla\times is Hermitian.
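
A NumPy sketch of the bilinear form u^T A v and the complex inner product with conjugation (the arrays are arbitrary examples):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
A = np.eye(3)                 # identity tensor: beta reduces to the dot product
print(u @ A @ v, u @ v)       # 32.0 32.0

# Complex inner product <u,v> = u . conj(v); <w,w> is real and non-negative
w = np.array([1 + 1j, 2 - 3j])
print(np.dot(w, w.conj()))    # (15+0j) = |1+1j|^2 + |2-3j|^2
```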

Outer productw (a tensorw called a dyadicw):\mathbf{u} \otimes \mathbf{v}.

As one would expect, every component of one vector multiplies with every component of the other vector.


\mathbf{u} \otimes \mathbf{v} =
\begin{bmatrix} u_1 \mathbf{e_1} \\ u_2 \mathbf{e_2} \\ u_3 \mathbf{e_3} \end{bmatrix}
\begin{bmatrix} v_1 \mathbf{e_1} & v_2 \mathbf{e_2} & v_3 \mathbf{e_3} \end{bmatrix} =
\begin{bmatrix}
{\color{red} u_1 v_1 \mathbf{e_1} \otimes \mathbf{e_1} } & {\color{blue} u_1 v_2 \mathbf{e_1} \otimes \mathbf{e_2} } & {\color{blue} u_1 v_3 \mathbf{e_1} \otimes \mathbf{e_3} } \\
{\color{blue} u_2 v_1 \mathbf{e_2} \otimes \mathbf{e_1} } & {\color{red} u_2 v_2 \mathbf{e_2} \otimes \mathbf{e_2} } & {\color{blue} u_2 v_3 \mathbf{e_2} \otimes \mathbf{e_3} } \\
{\color{blue} u_3 v_1 \mathbf{e_3} \otimes \mathbf{e_1} } & {\color{blue} u_3 v_2 \mathbf{e_3} \otimes \mathbf{e_2} } & {\color{red} u_3 v_3 \mathbf{e_3} \otimes \mathbf{e_3} }
\end{bmatrix}


Taking the dot product of uv and any vector x (See Visualization of Tensor multiplicationw) causes the components of x not pointing in the direction of v to become zero. What remains is then rotated from v to u.
A rotation matrix can be constructed by summing three outer products. The first two sum to form a bivector. The third one rotates the axis of rotation zero degrees. \mathbf{e}_1 \otimes \mathbf{e}_2 - \mathbf{e}_2 \otimes \mathbf{e}_1 + \mathbf{e}_3 \otimes \mathbf{e}_3
\mathbf{e}_1 \otimes \mathbf{e}_2 \cdot \mathbf{e}_2 = \mathbf{e}_1
The Tensor productw generalizes the outer productw.
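The "kill the non-parallel part, then carry it onto u" behaviour of the dyadic can be checked numerically; a NumPy sketch (vectors chosen arbitrarily):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
T = np.outer(u, v)            # 3x3 dyadic: T[i, j] = u[i] * v[j]
print(T.shape)                # (3, 3)

x = np.array([1.0, 0.0, 0.0])
# (u ⊗ v) x = u (v . x): components of x not along v become zero,
# and what remains is carried onto the direction of u.
print(T @ x, u * (v @ x))     # both [ 4.  8. 12.]
```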

A unit vector and a unit bivector are shown in red

Wedge productw (a simple bivectorw): \mathbf{u} \wedge \mathbf{v} = \mathbf{u} \otimes \mathbf{v} - \mathbf{v} \otimes \mathbf{u} = [\overline{\mathbf{u}}, \overline{\mathbf{v}}]

The wedge product is also called the exterior productw (sometimes mistakenly called the outer product).
The term "exterior" comes from the exterior product of two vectors not being a vector.
Just as a vector has length and direction so a bivector has an area and an orientation.
In three dimensions \mathbf{u} \wedge \mathbf{v} is a pseudovectorw and its dualw is the cross productw. \overline{\mathbf{u} \wedge \mathbf{v}} = \mathbf{u} \times \mathbf{v}

\mathbf{a \wedge b \wedge c = a \otimes b \otimes c - a \otimes c \otimes b + c \otimes a \otimes b - c \otimes b \otimes a + b \otimes c \otimes a - b \otimes a \otimes c}

The magnitude of a∧b∧c equals the volume of the parallelepiped.

The triple productw a∧b∧c is a trivector which is a 3rd degree tensor.
In 3 dimensions a trivector is a pseudoscalar so in 3 dimensions every trivector can be represented as a scalar times the unit trivector. See Levi-Civita symbolw
\mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c} = \mathbf{a}\cdot(\mathbf{b}\times \mathbf{c}) \; \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3
The Matrix commutatorw generalizes the wedge product.
[A_1, A_2] = A_1A_2 - A_2A_1

The dualw of vector a is bivector ā:

\overline{\mathbf{a}} \quad\stackrel{\rm def}{=} \quad\begin{bmatrix}\,\,0&\!-a_3&\,\,\,a_2\\\,\,\,a_3&0&\!-a_1\\\!-a_2&\,\,a_1&\,\,0\end{bmatrix}
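
A NumPy sketch checking that the dual matrix of a reproduces the cross product (the helper dual is an illustrative name):

```python
import numpy as np

def dual(a):
    """Skew-symmetric matrix of vector a, so that dual(a) @ b = a x b."""
    return np.array([[    0, -a[2],  a[1]],
                     [ a[2],     0, -a[0]],
                     [-a[1],  a[0],     0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(dual(a) @ b)        # [-3.  6. -3.]
print(np.cross(a, b))     # [-3.  6. -3.], the same vector
```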




Multiplying a tensor and a vector results in a new vector that not only might have a different magnitude but might even point in a completely different direction:

\begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = a_1 \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + a_2 \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} + a_3 \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix}

Unless the tensor is the identity tensor in which case the vector is completely unchanged:

\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}

Some special cases:

\begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \qquad \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} \qquad \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix}

Complex numbers can be used to represent and perform rotationsw but only in 2 dimensions.

Tensorsw, on the other hand, can be used in any number of dimensions to represent and perform rotations and other linear transformationsw. See Visualization of Tensor multiplicationw for a full explanation of how to multiply tensors and vectors.

Any affine transformationw is equivalent to a linear transformation followed by a translationw of the origin. (The originw is always a fixed point for any linear transformation.) "Translation" is just a fancy word for "move".
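
A NumPy sketch of an affine transformation as a linear map (here a 90° rotation, chosen for illustration) followed by a translation:

```python
import numpy as np

theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta)],   # linear part: rotate 90 degrees
              [np.sin(theta),  np.cos(theta)]])
t = np.array([10.0, 0.0])                        # translation part

def affine(x):
    return A @ x + t          # linear transformation, then translate

print(affine(np.array([1.0, 0.0])))   # ≈ [10. 1.]: rotated to (0,1), then moved
print(affine(np.array([0.0, 0.0])))   # [10. 0.]: the origin moves, so this map
                                      # is affine but not linear
```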

Just as a vector is a sum of unit vectors multiplied by constants so a tensor is a sum of unit dyadics (See outer product above) multiplied by constants. Each dyadic can be thought of as a plane having an orientation and magnitude.

The order or degree of a tensor is the total number of indices required to identify each component uniquely.[6] A vector is a 1st-order tensor.

A simple tensor is a tensor that can be written as a product of tensors of the form T=a\otimes b\otimes\cdots\otimes d. (See Outer product above.) The rank of a tensor T is the minimum number of simple tensors that sum to T.[7]


Linear groups

A square matrixw of order n is an n-by-n matrix. Any two square matrices of the same order can be added and multiplied. A matrix is invertible if and only if its determinant is nonzero.

\begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix} \begin{bmatrix} x_1 & x_2 \\ y_1 & y_2 \end{bmatrix} = \begin{bmatrix} a_1x_1+b_1y_1 & a_1x_2+b_1y_2 \\ a_2x_1+b_2y_1 & a_2x_2+b_2y_2 \end{bmatrix}

GLn(F) or GL(n, F), or simply GL(n) is the Lie group* of n×n invertible matrices with entries from the field F. The group GL(n, F) and its subgroups are often called linear groups or matrix groups.

SL(n, F) or SLn(F), is the subgroup* of GL(n, F) consisting of matrices with a determinantw of 1.
U(n), the Unitary group of degree n is the groupw of n × n unitary matricesw. (More general unitary matrices may have complex determinants with absolute value 1, rather than real 1 in the special case.) The group operation is matrix multiplicationw.[8]
SU(n), the special unitary group of degree n, is the Lie group* of n×n unitary matricesw with determinantw 1.
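
A quick NumPy check of these defining properties on one element of SU(2) (the particular matrix is an arbitrary example):

```python
import numpy as np

# An element of SU(2): unitary with determinant 1
U = np.array([[ 1 + 1j,  1 - 1j],
              [-1 - 1j,  1 - 1j]]) / 2
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: unitary
print(np.linalg.det(U))                         # ≈ (1+0j): special
```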


Symmetry groups

Aff(n,K): the affine group or general affine group of any affine space over a field K is the group of all invertible affine transformations from the space into itself.

E(n): rotations, reflections, and translations.
O(n): rotations, reflections
SO(n): rotations
so(3) is the Lie algebra of SO(3) and consists of all skew-symmetricw 3 × 3 matrices.



In 4 spatial dimensions a rigid object can rotate in 2 different ways simultaneously*.


Stereographic projection of four-dimensional Tesseractw in double rotation

See also: Hypersphere of rotations*
See Rotation group SO(3)*, Special unitary group*, Plate trick*, Spin representation*, Spin group*, Pin group*, Spinor*, Clifford algebraw, Indefinite orthogonal group*, Root system*, Bivectorsw, Curlw

From Wikipedia:Rotation group SO(3):

Consider the solid ball in R3 of radius π. For every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The two rotations through π and through −π are the same. So we identify* (or "glue together") antipodal points* on the surface of the ball.

The ball with antipodal surface points identified is a smooth manifold*, and this manifold is diffeomorphic* to the rotation group. It is also diffeomorphic to the real 3-dimensional projective space* RP3, so the latter can also serve as a topological model for the rotation group.

These identifications illustrate that SO(3) is connected* but not simply connected*. As to the latter, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open".


A set of belts can be continuously rotated without becoming twisted or tangled. The cube must go through two full rotations for the system to return to its initial state. See Tangloids*.

Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The Balinese plate trick* and similar tricks demonstrate this practically.

The same argument can be performed in general, and it shows that the fundamental group* of SO(3) is the cyclic groupw of order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects known as spinors*, and is an important tool in the development of the spin-statistics theorem*.

The universal cover* of SO(3) is a Lie group* called Spin(3)*. The group Spin(3) is isomorphic to the special unitary group* SU(2); it is also diffeomorphic to the unit 3-sphere* S3 and can be understood as the group of versors* (quaternionsw with absolute valuew 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotation*. The map from S3 onto SO(3) that identifies antipodal points of S3 is a surjective* homomorphism* of Lie groups, with kernel* {±1}. Topologically, this map is a two-to-one covering map*. (See the plate trick*.)


Spin group

From Wikipedia:Spin group:

The spin group Spin(n)[9][10] is the double cover* of the special orthogonal group* SO(n) = SO(n, R), such that there exists a short exact sequence* of Lie groups* (with n ≠ 2)

1 \to \mathrm{Z}_2 \to \operatorname{Spin}(n) \to \operatorname{SO}(n) \to 1.

As a Lie group, Spin(n) therefore shares its dimension*, n(n − 1)/2, and its Lie algebra* with the special orthogonal group.

For n > 2, Spin(n) is simply connected* and so coincides with the universal cover* of SO(n)*.

The non-trivial element of the kernel is denoted −1, which should not be confused with the orthogonal transform of reflection through the origin*, generally denoted −I .

Spin(n) can be constructed as a subgroup* of the invertible elements in the Clifford algebraw Cl(n). A distinct article discusses the spin representations*.

From Wikipedia:Spinor:

a spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360° (see picture). This property characterizes spinors. It is also possible to associate a substantially similar notion of spinor to Minkowski space in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913.

Quote from Elie Cartan: The Theory of Spinors, Hermann, Paris, 1966, first sentence of the Introduction section of the beginning of the book (before the page numbers start): "Spinors were first used under that name, by physicists, in the field of Quantum Mechanics. In their most general form, spinors were discovered in 1913 by the author of this work, in his investigations on the linear representations of simple groups*; they provide a linear representation of the group of rotations in a space with any number n of dimensions, each spinor having 2^\nu components where n = 2\nu+1 or 2\nu." The star (*) refers to Cartan 1913.

(Note: \nu is the number of simultaneous independent rotations* an object can have in n dimensions.)

In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles. More precisely, it is the fermions of spin-1/2 that are described by spinors, which is true both in the relativistic and non-relativistic theory. The wavefunction of the non-relativistic electron has values in 2 component spinors transforming under three-dimensional infinitesimal rotations. The relativistic Dirac equation* for the electron is an equation for 4 component spinors transforming under infinitesimal Lorentz transformations for which a substantially similar theory of spinors exists.

Spinors form a vector space, usually over the complex numbers, equipped with a linear group representation of the spin group that does not factor through a representation of the group of rotations (see diagram). The spin group is the group of rotations keeping track of the homotopy class. Spinors are needed to encode basic information about the topology of the group of rotations because that group is not simply connected, but the simply connected spin group is its double cover. So for every rotation there are two elements of the spin group that represent it. Geometric vectors and other tensors cannot feel the difference between these two elements, but they produce opposite signs when they affect any spinor under the representation.



External links:

A brief introduction to geometric algebra
A brief introduction to Clifford algebra
The Construction of Spinors in Geometric Algebra
Functions of Multivector Variables

From Wikipedia:Multivector:

The wedge productw operation (See Exterior algebraw) used to construct multivectors is linear, associative and alternating, which reflect the properties of the determinant. This means for vectors u, v and w in a vector space V and for scalars α, β, the wedge product has the properties,

  • Linear:  \mathbf{u}\wedge(\alpha\mathbf{v}+\beta\mathbf{w})=\alpha\mathbf{u}\wedge\mathbf{v}+\beta\mathbf{u}\wedge\mathbf{w};
  • Associative:  (\mathbf{u}\wedge\mathbf{v})\wedge\mathbf{w}=\mathbf{u}\wedge(\mathbf{v}\wedge\mathbf{w})=\mathbf{u}\wedge\mathbf{v}\wedge\mathbf{w};
  • Alternating:  \mathbf{u}\wedge\mathbf{v}=-\mathbf{v}\wedge\mathbf{u}, \quad\mathbf{u}\wedge\mathbf{u}=0.

However the wedge product is not invertible because many different pairs of vectors can have the same wedge product.

The product of p vectors, (\mathbf{v_1}\wedge\dots\wedge\mathbf{v_p}), is called a grade p multivector, or a p-vector. The maximum grade of a multivector is the dimension of the vector space V.

The linearity of the wedge product allows a multivector to be defined as the linear combination of basis multivectors. There are \binom{n}{p} basis p-vectors in an n-dimensional vector space.[11]

W. K. Clifford* combined multivectors with the inner productw defined on the vector space, in order to obtain a general construction for hypercomplex numbers that includes the usual complex numbers and Hamilton's quaternionsw.[12][13]

The Clifford product between two vectors is linear and associative like the wedge product. But unlike the wedge product the Clifford product is invertible.

Clifford's relation preserves the alternating property for the product of vectors that are perpendicular. But in contrast to the wedge product, the Clifford product of a vector with itself is no longer zero. To see why consider the square (quadratic form*) of a single vector:

Q(c) = c^2 = \langle c , c \rangle = (a\mathbf{e_1} + b\mathbf{e_2})^2
Q(c) = aa\mathbf{e_1}\mathbf{e_1} + bb\mathbf{e_2}\mathbf{e_2} + ab\mathbf{e_1}\mathbf{e_2} + ba\mathbf{e_2}\mathbf{e_1}
Q(c) = a^2\mathbf{e_1}\mathbf{e_1} + b^2\mathbf{e_2}\mathbf{e_2} + ab(\mathbf{e_1}\mathbf{e_2} + \mathbf{e_2}\mathbf{e_1})

From the Pythagorean theorem we know that:

c^2 = a^2 + b^2 + ab(0) = Scalar

Therefore Clifford deduced that:

\mathbf{e_1}\mathbf{e_1} = +1

\mathbf{e_2}\mathbf{e_2} = +1

\mathbf{e_1}\mathbf{e_2} = -\mathbf{e_2}\mathbf{e_1}

And therefore that:

(\mathbf{e_1}\mathbf{e_2})^2 = \mathbf{e_1}\mathbf{e_2}\mathbf{e_1}\mathbf{e_2} = -\mathbf{e_1}\mathbf{e_1}\mathbf{e_2}\mathbf{e_2} = -1

\mathbf{e_1}\mathbf{e_2} = i = Bivector?

And i, as we already know, has the effect of rotating complex numbers.

For any 2 arbitrary vectors:

fd = force*distance
fd = (f_1\mathbf{e_1} + f_2\mathbf{e_2})(d_1\mathbf{e_1} + d_2\mathbf{e_2})
fd = f_1d_1\mathbf{e_1}\mathbf{e_1} + f_2d_2\mathbf{e_2}\mathbf{e_2} + f_1d_2\mathbf{e_1}\mathbf{e_2} + f_2d_1\mathbf{e_2}\mathbf{e_1}
fd = f_1d_1\mathbf{e_1}\mathbf{e_1} + f_2d_2\mathbf{e_2}\mathbf{e_2} + f_1d_2\mathbf{e_1}\mathbf{e_2} - f_2d_1\mathbf{e_1}\mathbf{e_2}
fd = f_1d_1\mathbf{e_1}\mathbf{e_1} + f_2d_2\mathbf{e_2}\mathbf{e_2} + (f_1d_2 - f_2d_1)\mathbf{e_1}\mathbf{e_2}

Applying Clifford's deductions we get:

fd = f_1d_1 + f_2d_2 + (f_1d_2 - f_2d_1)\mathbf{e_1} \wedge \mathbf{e_2}
fd = Energy + Torque
fd = {\color{red} f \cdot d} + {\color{blue} f \wedge d}
fd = {\color{red} Scalar} + {\color{blue} Bivector} = Multivector
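
The whole derivation can be verified with a tiny multivector multiplier for Cl2,0; here is a minimal Python sketch (the coefficient layout (scalar, e1, e2, e1e2) and the name gp are choices made for illustration):

```python
def gp(M, N):
    """Geometric product in Cl(2,0); M, N = (scalar, e1, e2, e1e2)."""
    s, a1, a2, b = M
    t, c1, c2, d = N
    return (s*t + a1*c1 + a2*c2 - b*d,        # scalar part
            s*c1 + a1*t - a2*d + b*c2,        # e1 part
            s*c2 + a2*t + a1*d - b*c1,        # e2 part
            s*d + b*t + a1*c2 - a2*c1)        # e1e2 (bivector) part

e1, e12 = (0, 1, 0, 0), (0, 0, 0, 1)
print(gp(e1, e1))     # (1, 0, 0, 0): e1 squares to +1
print(gp(e12, e12))   # (-1, 0, 0, 0): the bivector squares to -1, like i

f = (0, 2, 3, 0)      # f = 2 e1 + 3 e2
d = (0, 5, 7, 0)      # d = 5 e1 + 7 e2
print(gp(f, d))       # (31, 0, 0, -1): f.d = 31 (scalar), f^d = -1 (bivector)
```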

For comparison here is the outer product of the same 2 vectors:

f \otimes d =
\begin{bmatrix} f_1 \mathbf{e}_1 \\ f_2 \mathbf{e}_2 \end{bmatrix}
\begin{bmatrix} d_1 \mathbf{e}_1 & d_2 \mathbf{e}_2 \end{bmatrix} =
\begin{bmatrix}
{\color{red}  f_1 d_1 \mathbf{e}_{11} } & {\color{blue} f_1 d_2 \mathbf{e}_{12} } \\
{\color{blue} f_2 d_1 \mathbf{e}_{21} } & {\color{red}  f_2 d_2 \mathbf{e}_{22} }
\end{bmatrix} (See divergence, curl, & gradient below)

This particular Clifford algebra is known as Cl2,0. The subscript 2 indicates that the 2 basis vectors are square roots of +1. See Metric signature*. If we had used c^2 = -a^2 -b^2 then the result would have been Cl0,2.

From Wikipedia:Clifford algebra:

Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form:

Q(v) = v_1^2 + \cdots + v_p^2 - v_{p+1}^2 - \cdots - v_{p+q}^2 ,

where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the signature* of the quadratic form. The real vector space with this quadratic form is often denoted Rp,q. The Clifford algebra on Rp,q is denoted Cℓp,q(R). The symbol Cℓn(R) means either Cℓn,0(R) or Cℓ0,n(R) depending on whether the author prefers positive-definite or negative-definite spaces.

A standard basisw {ei} for Rp,q consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. The algebra Cℓp,q(R) will therefore have p vectors that square to +1 and q vectors that square to −1.

Note that Cℓ0,0(R) is naturally isomorphic to R since there are no nonzero vectors. Cℓ0,1(R) is a two-dimensional algebra generated by a single vector e1 that squares to −1, and therefore is isomorphic as an algebra (but not as a superalgebra*) to C, the field of complex numbers. The algebra Cℓ0,2(R) is a four-dimensional algebra spanned by {1, e1, e2, e1e2}. The latter three elements square to −1 and all anticommute, and so the algebra is isomorphic to the quaternions H. Cℓ0,3(R) is an 8-dimensional algebra isomorphic to the direct sum* HH called split-biquaternions*.

From Wikipedia:Spacetime algebra:

Spacetime algebra* (STA) is a name for the Clifford algebraw Cl3,1(R), or equivalently the geometric algebraw G(M4), which can be particularly closely associated with the geometry of special relativityw and relativistic spacetimew. See also Algebra of physical space*.

The spacetime algebra may be built up from an orthogonal basis of one time-like vector \gamma_0 and three space-like vectors, \{\gamma_1, \gamma_2, \gamma_3\}, with the multiplication rule

 \gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu = 2 \eta_{\mu \nu}

where \eta_{\mu \nu} is the Minkowski metricw with signature (+ + + −).

Thus, \gamma_0^2 = {-1}, \gamma_1^2 = \gamma_2^2 = \gamma_3^2 = {+1}, otherwise \gamma_\mu \gamma_\nu = - \gamma_\nu \gamma_\mu.

The basis vectors \gamma_k share these properties with the Dirac matrices*, but no explicit matrix representation need be used in STA.



From Wikipedia:Geometric algebra

The inverse of a vector is:

 v^{-1} = \frac{1}{v} = \frac{v}{vv} = \frac{v}{v \cdot v + v \wedge v} = \frac{v}{v \cdot v}

The projection of v onto a (or the parallel part) is

 v_{\| a} = (v \cdot a)a^{-1}

and the rejection of v from a (or the orthogonal part) is

 v_{\perp a} = v - v_{\| a} = (v\wedge a)a^{-1} .

The reflection v' of a vector v along a vector a, or equivalently across the hyperplane orthogonal to a, is the same as negating the component of a vector parallel to a. The result of the reflection will be

v' = {-v_{\| a} + v_{\perp a}} = {-(v \cdot a)a^{-1} + (v \wedge a)a^{-1}}
= {(-a \cdot v - a \wedge v)a^{-1}}
= -ava^{-1}

If a is a unit vector then a=a^{-1} and v' = -ava

If we have a product of vectors R = a_1a_2 \cdots a_r then we denote the reverse as

R^\dagger = (a_1a_2\cdots a_r)^\dagger = a_r\cdots a_2 a_1.

Any rotation is equivalent to 2 reflections.

v'' = bv'b = bavab = RvR^\dagger

R is called a Rotor

R = ba = b \cdot a + b \wedge a = Scalar + Bivector
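
The reflection and double-reflection (rotor) picture can be checked numerically with the equivalent classical formula v' = v - 2(v·a)a for reflecting across the hyperplane orthogonal to a unit vector a; a NumPy sketch (the vectors are arbitrary examples):

```python
import numpy as np

def reflect(v, a):
    """Negate the component of v parallel to unit vector a
    (the same map as v' = -a v a in geometric algebra)."""
    return v - 2 * (v @ a) * a

v = np.array([1.0, 0.0])
a = np.array([1.0, 0.0])                 # first mirror
b = np.array([1.0, 1.0]) / np.sqrt(2)    # second mirror, 45 degrees from a

# Two reflections make a rotation by twice the angle between a and b:
print(reflect(reflect(v, a), b))         # [0. 1.]: v rotated by 90 degrees
```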



From Wikipedia:Quaternion:

v^{\prime\prime} = \sigma_2 \sigma_1 \, v \, \sigma_1 \sigma_2

corresponds to a rotation of 180° in the plane containing σ1 and σ2.

This is very similar to the corresponding quaternion formula,

v^{\prime\prime} = -\mathbf{k}\, v\, \mathbf{k}.

In fact, the two are identical, if we make the identification

\mathbf{k} = \sigma_2 \sigma_1, \mathbf{i} = \sigma_3 \sigma_2, \mathbf{j} = \sigma_1 \sigma_3

and it is straightforward to confirm that this preserves the Hamilton relations

\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i} \mathbf{j} \mathbf{k} = -1.

In this picture, quaternions correspond not to vectors but to bivectorsw – quantities with magnitude and orientations associated with particular 2D planes rather than 1D directions. The relation to complex numbersw becomes clearer, too: in 2D, with two vector directions σ1 and σ2, there is only one bivector basis element σ1σ2, so only one imaginary. But in 3D, with three vector directions, there are three bivector basis elements σ1σ2, σ2σ3, σ3σ1, so three imaginaries.

The usefulness of quaternions for geometrical computations can be generalised to other dimensions, by identifying the quaternions as the even part Cℓ+3,0(R) of the Clifford algebraw Cℓ3,0(R).

There are at least two ways of representing quaternions as matricesw in such a way that quaternion addition and multiplication correspond to matrix addition and matrix multiplicationw.

Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as

\begin{bmatrix}a+bi & c+di \\ -c+di & a-bi \end{bmatrix}.

Using 4 × 4 real matrices, that same quaternion can be written as

\begin{bmatrix} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{bmatrix} = a \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} + b \begin{bmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{bmatrix} + c \begin{bmatrix} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{bmatrix} + d \begin{bmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}
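
The 4 × 4 real representation can be checked in Python; a sketch (quat_matrix is an illustrative helper):

```python
import numpy as np

def quat_matrix(a, b, c, d):
    """4x4 real matrix representing the quaternion a + bi + cj + dk."""
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

i = quat_matrix(0, 1, 0, 0)
j = quat_matrix(0, 0, 1, 0)
k = quat_matrix(0, 0, 0, 1)
I = quat_matrix(1, 0, 0, 0)

print(np.array_equal(i @ i, -I))       # True: i^2 = -1
print(np.array_equal(i @ j, k))        # True: ij = k
print(np.array_equal(i @ j @ k, -I))   # True: ijk = -1
```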



External link:An introduction to spinors

Spinors may be regarded as non-normalised rotors in which the reverse rather than the inverse is used in the sandwich product.[14]

From Wikipedia:Clifford_algebra#Spinors:

Clifford algebras Cℓp,q(C), with p + q = 2n even, are matrix algebras which have a complex representation of dimension 2^n. By restricting to the group Pinp,q(R) we get a complex representation of the Pin group of the same dimension, called the spin representation*. If we restrict this to the spin group Spinp,q(R) then it splits as the sum of two half spin representations (or Weyl representations) of dimension 2^{n−1}.

If p + q = 2n + 1 is odd then the Clifford algebra Cℓp,q(C) is a sum of two matrix algebras, each of which has a representation of dimension 2^n, and these are also both representations of the Pin group Pinp,q(R). On restriction to the spin group Spinp,q(R) these become isomorphic, so the spin group has a complex spinor representation of dimension 2^n.

More generally, spinor groups and pin groups over any field have similar representations whose exact structure depends on the structure of the corresponding Clifford algebras*: whenever a Clifford algebra has a factor that is a matrix algebra over some division algebra, we get a corresponding representation of the pin and spin groups over that division algebra. For examples over the reals see the article on spinors*.

From Wikipedia:Tensor#Spinors:

When changing from one orthonormal basis* (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected* (see orientation entanglement* and plate trick*): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.[15] A spinor* is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.[16][17]

Succinctly, spinors are elements of the spin representation* of the rotation group, while tensors are elements of its tensor representations*. Other classical groups* have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.


Pauli matrices

From Wikipedia:Pauli matrices

The Pauli matrices are a set of "gamma" matrices in dimension 3 with metric of Euclidean signature (3,0). The Pauli matrices are a set of three 2 × 2 complexw matricesw which are Hermitianw and unitaryw.[18] They are

\sigma_0 = I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad
\sigma_1 = \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad
\sigma_2 = \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad
\sigma_3 = \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \,.
\sigma_1^2 = \sigma_2^2 = \sigma_3^2 = -i\sigma_1 \sigma_2 \sigma_3 = \begin{pmatrix} 1&0\\0&1\end{pmatrix} = I

commutationw relations:

\left[\sigma_1, \sigma_2\right] = 2i\sigma_3
\left[\sigma_2, \sigma_3\right] = 2i\sigma_1
\left[\sigma_3, \sigma_1\right] = 2i\sigma_2
\left[\sigma_1, \sigma_1\right] = 0

anticommutation* relations:

\left\{\sigma_1, \sigma_1\right\} = 2I
\left\{\sigma_1, \sigma_2\right\} = 0.

Adding the commutator to the anticommutator gives:

(\vec{a} \cdot \vec{\sigma})(\vec{b} \cdot \vec{\sigma}) = (\vec{a} \cdot \vec{b}) \, I + i ( \vec{a} \times \vec{b} )\cdot \vec{\sigma}

If  i is identified with the pseudoscalar  \sigma_x \sigma_y \sigma_z then the right hand side becomes  a \cdot b + a \wedge b which is also the definition for the product of two vectors in geometric algebra.

Exponential of a Pauli vector:

e^{i a(\hat{n} \cdot \vec{\sigma})} = I\cos{a} + i (\hat{n} \cdot \vec{\sigma}) \sin{a}

The real linear span of {I, iσ1, iσ2, iσ3} is isomorphic to the real algebra of quaternionsw \mathbb{H}. The isomorphism from \mathbb{H} to this set is given by the following map (notice the reversed signs for the Pauli matrices):

  1 \mapsto I, \quad
  i \mapsto - i \sigma_1, \quad
  j \mapsto - i \sigma_2, \quad
  k \mapsto - i \sigma_3.

Quaternions form a division algebra*—every non-zero element has an inverse—whereas Pauli matrices do not.
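
A NumPy sketch verifying the relations above and the quaternion map (all matrices as defined in this section):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_x
s2 = np.array([[0, -1j], [1j, 0]])                 # sigma_y
s3 = np.array([[1, 0], [0, -1]], dtype=complex)    # sigma_z

print(np.allclose(s1 @ s1, np.eye(2)))             # True: sigma_1^2 = I
print(np.allclose(s1 @ s2 - s2 @ s1, 2j * s3))     # True: [s1, s2] = 2i s3
print(np.allclose(s1 @ s2 + s2 @ s1, 0))           # True: {s1, s2} = 0

# The map i -> -i*s1, j -> -i*s2, k -> -i*s3 reproduces the quaternions:
i, j, k = -1j * s1, -1j * s2, -1j * s3
print(np.allclose(i @ j, k), np.allclose(i @ i, -np.eye(2)))  # True True
```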


Maxwell's equations

From Wikipedia:Mathematical descriptions of the electromagnetic field

Analogous to the tensor formulation, two objects, one for the field and one for the current, are introduced. In geometric algebraw (GA) these are multivectorsw. The field multivector, known as the Riemann–Silberstein vector*, is

 \bold{F} = \bold{E} + Ic\bold{B} = E^k\sigma_k + IcB^k\sigma_k

and the current multivector is

 c \rho - \bold{J} = c \rho - J^k\sigma_k

where the fields are written in the algebra of physical space* (APS) C\ell_{3,0}(\mathbb{R}) with the vector basis \{\sigma_k\}. The unit pseudoscalar* is I=\sigma_1\sigma_2\sigma_3 (assuming an orthonormal basis*). Orthonormal basis vectors share the algebra of the Pauli matrices*, but are usually not equated with them. After defining the derivative

 \boldsymbol{\nabla} = \sigma^k \partial_k

Maxwell's equations are reduced to the single equation[19]

 \left(\frac{1}{c}\dfrac{\partial }{\partial t} + \boldsymbol{\nabla}\right)\bold{F} = \mu_0 c (c \rho - \bold{J}).

In three dimensions, the derivative has a special structure allowing the introduction of a cross product:

 \boldsymbol{\nabla}\bold{F} = \boldsymbol{\nabla} \cdot \bold{F} + \boldsymbol{\nabla} \wedge \bold{F} = \boldsymbol{\nabla} \cdot \bold{F} + I \boldsymbol{\nabla} \times \bold{F}

from which it is easily seen that Gauss's law is the scalar part, the Ampère–Maxwell law is the vector part, Faraday's law is the pseudovector part, and Gauss's law for magnetism is the pseudoscalar part of the equation. After expanding and rearranging, this can be written as

\left( \boldsymbol{\nabla} \cdot \mathbf{E} - \frac{\rho}{\epsilon_0} \right)- c \left( \boldsymbol{\nabla} \times \mathbf{B} - \mu_0 \epsilon_0 \frac{\partial {\mathbf{E}}}{\partial {t}} - \mu_0 \mathbf{J} \right)+ I \left( \boldsymbol{\nabla} \times \mathbf{E} + \frac{\partial {\mathbf{B}}}{\partial {t}} \right)+ I c \left( \boldsymbol{\nabla} \cdot \mathbf{B} \right)= 0

We can identify APS as a subalgebra of the spacetime algebra* (STA) C\ell_{1,3}(\mathbb{R}), defining \sigma_k=\gamma_k\gamma_0 and I=\gamma_0\gamma_1\gamma_2\gamma_3. The \gamma_\mu's have the same algebraic properties as the gamma matrices* but their matrix representation is not needed. The derivative is now

\nabla = \gamma^\mu \partial_\mu.

The Riemann–Silberstein becomes a bivector

F = \bold{E} + Ic\bold{B} = E^1\gamma_1\gamma_0 + E^2\gamma_2\gamma_0 + E^3\gamma_3\gamma_0 -c(B^1\gamma_2\gamma_3 + B^2\gamma_3\gamma_1 + B^3\gamma_1\gamma_2),

and the charge and current density become a vector

J = J^\mu \gamma_\mu = c \rho \gamma_0 + J^k \gamma_k = \gamma_0(c \rho - J^k \sigma_k).

Owing to the identity

\gamma_0 \nabla = \gamma_0\gamma^0 \partial_0 + \gamma_0\gamma^k\partial_k = \partial_0 + \sigma^k\partial_k = \frac{1}{c}\dfrac{\partial }{\partial t} + \boldsymbol{\nabla},

Maxwell's equations reduce to the single equation

 \nabla F = \mu_0 c J.



From Wikipedia:Function (mathematics)


A function f takes an input x, and returns a single output f(x). One metaphor describes the function as a "machine" or "black box" that for each input returns a corresponding output.


The red curve is the graph of a function f in the Cartesian plane, consisting of all points with coordinates of the form (x, f(x)). The property of having one output for each input is represented geometrically by the fact that each vertical line (such as the yellow line through the origin) has exactly one crossing point with the curve.

In mathematics, a function[20] is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x2. The output of a function f corresponding to an input x is denoted by f(x) (read "f of x"). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9. Likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. (The same output may be produced by more than one input, but each input gives only one output.) The input variable(s) are sometimes referred to as the argument(s) of the function.


Euclid's "common notions"

From Wikipedia:Euclidean geometry:

Things that do not differ from one another are equal to one another


Things that are equal to the same thing are also equal to one another

If a=b and b=c then a=c

If equals are added to equals, then the wholes are equal

If a=b and c=d then a+c=b+d

If equals are subtracted from equals, then the remainders are equal

If a=b and c=d then a-c=b-d

The whole is greater than the part.

If b≠0 then a+b>a


Elementary algebra

From Wikipedia:Elementary algebra:

Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers.

Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (additionw, subtractionw, multiplicationw, divisionw and exponentiationw). For example,

  • Added terms are simplified using coefficients. For example, x + x + x can be simplified as 3x (where 3 is a numerical coefficient).
  • Multiplied terms are simplified using exponents. For example, x \times x \times x is represented as x^3
  • Like terms are added together,[21] for example, 2x^2 + 3ab - x^2 + ab is written as x^2 + 4ab, because the terms containing x^2 are added together, and, the terms containing ab are added together.
  • Expressions can be factored. For example, 6x^5 + 3x^2, by dividing both terms by 3x^2 can be written as 3x^2 (2x^3 + 1)

For any function f, if a=b then:

  • f(a) = f(b)
  • a + c = b + c
  • ac = bc
  • a^c = b^c

One must be careful though when squaring both sides of an equation since this can result in solutions that don't satisfy the original equation.

1 \neq -1 yet 1^2 = (-1)^2
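
A SymPy sketch of how squaring introduces an extraneous solution (the equation sqrt(x) = x - 2 is an arbitrary example):

```python
import sympy as sp

x = sp.symbols('x')
# Squaring both sides of sqrt(x) = x - 2 gives x = (x - 2)^2,
# which picks up an extraneous solution:
print(sp.solve(sp.Eq(x, (x - 2)**2), x))        # [1, 4]
print(sp.solve(sp.Eq(sp.sqrt(x), x - 2), x))    # [4]: only 4 survives the check
```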

A function is an even functionw if f(x) = f(-x)

A function is an odd functionw if f(x) = -f(-x)






The law of cosinesw reduces to the Pythagorean theoremw when gamma=90 degrees

c^2 = a^2 + b^2 - 2ab\cos\gamma,

The law of sinesw (also known as the "sine rule") for an arbitrary triangle states:

\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = \frac{abc}{2\Delta},

where \Delta is the area of the triangle

\mbox{Area} = \Delta = \frac{1}{2}a b\sin C.

The law of tangentsw:

\frac{a-b}{a+b} = \frac{\tan\left[\tfrac{1}{2}(A-B)\right]}{\tan\left[\tfrac{1}{2}(A+B)\right]}


Right triangles

A right triangle is a triangle with gamma=90 degrees.

For small values of x, sin x ≈ x. (If x is in radians).

SOH → sin = "opposite" / "hypotenuse"

CAH → cos = "adjacent" / "hypotenuse"

TOA → tan = "opposite" / "adjacent"


sin A = a/c

cos A = b/c

tan A = a/b

\sin x = \frac{e^{ix} - e^{-ix}}{2i}, \qquad \cos x = \frac{e^{ix} + e^{-ix}}{2}, \qquad \tan x = \frac{i(e^{-ix} - e^{ix})}{e^{ix} + e^{-ix}}.


Hyperbolic functions

See also: Hyperbolic angle*
From Wikipedia:Hyperbolic function:

A ray through the unit hyperbola* x^2 - y^2 = 1 in the point  (\cosh a, \sinh a), where a is twice the area between the ray, the hyperbola, and the x-axis. For points on the hyperbola below the x-axis, the area is considered negative (see animated versionw with comparison with the trigonometric (circular) functions).


Circle and hyperbola tangent at (1,1) display geometry of circular functions in terms of circular sector* area u and hyperbolic functions depending on hyperbolic sector* area u.

Hyperbolic functionsw are analogs of the ordinary trigonometric, or circular, functions.

  • Hyperbolic sine:
\sinh x = \frac {e^x - e^{-x}} {2} = \frac {e^{2x} - 1} {2e^x} = \frac {1 - e^{-2x}} {2e^{-x}}.
  • Hyperbolic cosine:
\cosh x = \frac {e^x + e^{-x}} {2} = \frac {e^{2x} + 1} {2e^x} = \frac {1 + e^{-2x}} {2e^{-x}}.
  • Hyperbolic tangent:
\tanh x = \frac{\sinh x}{\cosh x} = \frac {e^x - e^{-x}} {e^x + e^{-x}} = \frac{e^{2x} - 1} {e^{2x} + 1} = \frac{1 - e^{-2x}} {1 + e^{-2x}}.
  • Hyperbolic cotangent:
\coth x = \frac{\cosh x}{\sinh x} = \frac {e^x + e^{-x}} {e^x - e^{-x}} = \frac{e^{2x} + 1} {e^{2x} - 1} = \frac{1 + e^{-2x}} {1 - e^{-2x}}, \qquad x \neq 0.
  • Hyperbolic secant:
\operatorname{sech} x = \frac{1}{\cosh x} = \frac {2} {e^x + e^{-x}} = \frac{2e^x} {e^{2x} + 1} = \frac{2e^{-x}} {1 + e^{-2x}}.
  • Hyperbolic cosecant:
\operatorname{csch} x = \frac{1}{\sinh x} = \frac {2} {e^x - e^{-x}} = \frac{2e^x} {e^{2x} - 1} = \frac{2e^{-x}} {1 - e^{-2x}}, \qquad x \neq 0.



See Runge's phenomenon*, Polynomial ring*, System of polynomial equations*, Rational root theorem*, Descartes' rule of signs*, and Complex conjugate root theorem*
From Wikipedia:Polynomial:

A polynomialw can always be written in the form

polynomial = Z(x) = a_0 + a_1 x + a_2 x^2 + \dotsb + a_{n-1}x^{n-1} + a_n x^n

where a_0, \ldots, a_n are constants called coefficients and n is the degreew of the polynomial.

A linear polynomial* is a polynomial of degree one.

Each individual term* is the product of the coefficient* and a variable raised to a nonnegative integer power.

A monomial* has only one term.
A binomial* has 2 terms.

Fundamental theorem of algebra*:

Every single-variable, degree n polynomial with complex coefficients has exactly n complex rootsw.
However, some or even all of the roots might be the same number.
A root (or zero) of a function is a value of x for which Z(x)=0.
Z(x) = a_n(x - z_1)(x - z_2)\dotsb(x - z_n)
If Z(x) = (x - z_1)(x - z_2)^k then z_2 is a root of multiplicity* k.[22] z_2 is a root of multiplicity k-1 of the derivative (Derivative is defined below) of Z(x).
If k=1 then z_2 is a simple root.
The graph is tangent to the x axis at the multiple roots of Z and not tangent at the simple roots.
The graph crosses the x-axis at roots of odd multiplicity and bounces off (not goes through) the x-axis at roots of even multiplicity.
Near x=z_2 the graph has the same general shape as A(x - z_2)^k
The roots of the equation ax^2+bx+c=0 are given by the Quadratic formulaw:
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. See Completing the squarew
ax^2+bx+c = a(x+\frac{b}{2a})^2+c-\frac{b^2}{4a} = a(x-h)^2+k
This is a parabola shifted to the right h units, stretched by a factor of a, and moved upward k units.
k is the value at x=h and is either the maximum or the minimum value.

(x+y)^n = {n \choose 0}x^n y^0 + {n \choose 1}x^{n-1}y^1 + {n \choose 2}x^{n-2}y^2 + \cdots + {n \choose n-1}x^1 y^{n-1} + {n \choose n}x^0 y^n,

Where \binom{n}{k} = \frac{n!}{k! (n-k)!}. See Binomial coefficientw

x^2 - y^2 = (x + y)(x - y)

x^2 + y^2 = (x + yi)(x - yi)

The polynomial remainder theoremw states that the remainder of the division of a polynomial Z(x) by the linear polynomial x-a is equal to Z(a). See Ruffini's rule*.

Determining the value at Z(a) is sometimes easier if we use Horner's method* (synthetic division*) by writing the polynomial in the form

Z(x) = a_0 + x(a_1 + x(a_2 + \cdots + x(a_{n-1} + x(a_n)))).
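
A Python sketch of Horner's method as written above (horner is an illustrative name):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n at x,
    written as a_0 + x*(a_1 + x*(a_2 + ...))."""
    result = 0
    for a in reversed(coeffs):     # start from the innermost parenthesis, a_n
        result = result * x + a
    return result

# Z(x) = 2 + 3x + x^2, so Z(5) = 2 + 15 + 25 = 42
print(horner([2, 3, 1], 5))        # 42
# Polynomial remainder theorem: dividing Z(x) by (x - 5) leaves remainder Z(5)
```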

A monic polynomial* is a one variable polynomial in which the leading coefficient is equal to 1.

a_0 + a_1x + a_2x^2 + \cdots + a_{n-1}x^{n-1} + 1x^n


Rational functions

A rational function* is a function of the form

f(x) = k{(x - z_1)(x - z_2)\dotsb(x - z_n) \over (x - p_1)(x - p_2)\dotsb(x - p_m)} = {Z(x) \over P(x)}

It has n zerosw and m polesw. A pole is a value of x for which |f(x)| = infinity.

The vertical asymptotesw are the poles of the rational function.
If n<m then f(x) has a horizontal asymptote at the x axis
If n=m then f(x) has a horizontal asymptote at k.
If n>m then f(x) has no horizontal asymptote.
See also Wikipedia:Asymptote#Oblique_asymptotes
Given two polynomials Z(x) and P(x) = (x-p_1)(x-p_2) \cdots (x-p_m), where the pi are distinct constants and deg Z < m, partial fractionsw are generally obtained by supposing that
\frac{Z(x)}{P(x)} = \frac{c_1}{x-p_1} + \frac{c_2}{x-p_2} + \cdots + \frac{c_m}{x-p_m}
and solving for the ci constants, by substitution, by equating the coefficients* of terms involving the powers of x, or otherwise.
(This is a variant of the method of undetermined coefficients*.)[23]
If the degree of Z is not less than m then use long division to divide P into Z. The remainder then replaces Z in the equation above and one proceeds as before.
If P(x) = (x-p)^m then \frac{Z(x)}{P(x)} = \frac{c_1}{(x-p)} + \frac{c_2}{(x-p)^2} + \cdots + \frac{c_m}{(x-p)^m}
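
In practice a computer algebra system can do the decomposition; a SymPy sketch on an arbitrary example:

```python
import sympy as sp

x = sp.symbols('x')
expr = (3*x + 5) / ((x - 1) * (x + 2))
print(sp.apart(expr, x))    # 8/(3*(x - 1)) + 1/(3*(x + 2))
```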

A Generalized hypergeometric series* is given by

\sum_{x=0}^{\infty} c_x where c_0=1 and {c_{x+1} \over c_x} = {Z(x) \over P(x)} = f(x)

The function f(x) has n zeros and m poles.

Basic hypergeometric series*, or hypergeometric q-series, are q-analogue* generalizations of generalized hypergeometric series.[24]
Roughly speaking a q-analog* of a theorem, identity or expression is a generalization involving a new parameter q that returns the original theorem, identity or expression in the limit as q → 1[25]
We define the q-analog of n, also known as the q-bracket or q-number of n, to be
[n]_q=\frac{1-q^n}{1-q} = q^0 + q^1 + q^2 + \ldots + q^{n - 1}
one may define the q-analog of the factorialw, known as the q-factorial*, by

[n]_q! = [1]_q  \cdot [2]_q \cdots [n-1]_q  \cdot [n]_q
Elliptic hypergeometric series* are generalizations of basic hypergeometric series.
An elliptic function is a meromorphic function that is periodic in two directions.

A generalized hypergeometric function* is given by

F(x) = {}_nF_m(z_1,...z_n;p_1,...p_m;x) = \sum_{y=0}^{\infty} c_y x^y

So for e^x (see below) we have:

 c_y = \frac{1}{y!}, \qquad \frac{c_{y+1}}{c_y} = \frac{1}{y+1}.
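
A Python sketch building a series directly from its term ratio; with c_{y+1}/c_y = 1/(y+1) the sum reproduces e^x:

```python
x = 1.0
term, total = 1.0, 1.0          # c_0 = 1
for y in range(20):
    term *= x / (y + 1)         # c_{y+1} x^{y+1} = (c_y x^y) * x/(y+1)
    total += term
print(total)                    # 2.718281828... = e^1
```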


Integration and differentiation


Force • distance = energy

See also: Hyperreal numberw and Implicit differentiationw

The integralw is a generalization of multiplication.

For example: a unit mass dropped from point x2 to point x1 will release energy.
The usual equation is a simple multiplication:
gravity \cdot (x_2 - x_1) = energy
But that equation cant be used if the strength of gravity is itself a function of x.
The strength of gravity at x1 would be different than it is at x2.
And in reality gravity really does depend on x (x is the distance from the center of the earth):
gravity(x) = 1/x^2 (See inverse-square laww.)
However, the corresponding Definite integralw is easily solved:
\int_{x_1}^{x_2} gravity(x) \cdot dx
The rules for solving it are surprisingly simple
\int_{x_1}^{x_2} f(x) \cdot dx \quad = \quad F(x_2)-F(x_1)

F(x) is called the indefinite integralw. (antiderivativew)

F(x) = \int f(x) \cdot dx

k and y are arbitrary constants (with y ≠ −1; the case x−1 is covered below):

\int k \cdot x^y \cdot dx \quad = \quad k \cdot \int x^y \cdot dx \quad = \quad k \cdot \frac{x^{y+1}}{y+1}

(Units (feet, mm...) behave exactly like constants.)

And most conveniently:

\int \bigg (f(x) + g(x) \bigg) \cdot dx = \int f(x) \cdot dx + \int g(x) \cdot dx
The integral of a function is equal to the area under the curve.
When the "curve" is a constant (in other words, k•x0) then the integral reduces to ordinary multiplication.

The derivativew is a generalization of division.

The derivative of the integral of f(x) is just f(x).


The derivative of a function at any point is equal to the slope of the function at that point.


The equation of the line tangent to a function at point a is

y(x) = f(a) + f'(a)(x-a)

The Lipschitz constantw of a function is a real number for which the absolute value of the slope of the function at every point is not greater than this real number.

The derivative of f(x) where f(x) = k•xy is

f'(x) = {df \over dx} = {d(k \cdot x^y) \over dx} \quad = \quad k \cdot {d(x^y) \over dx} \quad = \quad k \cdot y \cdot x^{y-1}
The derivative of k \cdot x^0 is k \cdot 0 \cdot x^{-1} = 0
The integral of x^{-1} is ln(x)[26]. See natural logw

Chain rulew for the derivative of a function of a function:

f(g(x))' = \frac{df}{dg} \cdot \frac{dg}{dx}

The Chain rule for a function of 2 functions:

f(g(x), h(x))' = \frac{\operatorname df}{\operatorname dx} = { \partial f \over \partial g}{\operatorname dg \over \operatorname dx} + {\partial f \over \partial h}{\operatorname dh \over \operatorname dx } (See "partial derivatives" below)

The Product rulew can be considered a special case of the chain rulew for several variables[27]

\frac{df}{dx} = {d (g(x) \cdot h(x)) \over dx} = \frac{\partial(g \cdot h)}{\partial g}\frac{dg}{dx}+\frac{\partial (g \cdot h)}{\partial h}\frac{dh}{dx} = \frac{dg}{dx} h + g \frac{dh}{dx}

Product rulew:

(g \cdot h)' = \frac{(g+dg) \cdot (h+dh) - g \cdot h}{dx} = g' \cdot h + g \cdot h' (because dh \cdot dg is negligible)
(g \cdot h \cdot j)' = g' \cdot h \cdot j + g \cdot h' \cdot j + g \cdot h \cdot j'

General Leibniz rule*:

(gh)^{(n)}=\sum_{k=0}^n {n \choose k} g^{(n-k)} h^{(k)}

By the chain rule:

\bigg(\frac{1}{h}\bigg)' = \frac{-1}{h^2} \cdot h'

Therefore the Quotient rulew:

\bigg( \frac{g(x)}{h(x)} \bigg)' = \bigg( g \cdot \frac{1}{h} \bigg)'  = g' \cdot \frac{1}{h} + g \cdot \frac{-h'}{h^2} = \frac{g' \cdot h  - g \cdot h'}{h^2}

There is a chain rule for integration but the inner function must have the form g=ax+c so that its derivative \frac{dg}{dx} = a and therefore dx=\frac{dg}{a}

\int f(g(x)) \cdot dx = \int f(g) \cdot \frac{dg}{a} = \frac{1}{a} \int f(g) \cdot dg

Actually the inner function can have the form g=ax^y+c so that its derivative \frac{dg}{dx} = a \cdot y \cdot x^{y-1} and therefore dx=\frac{dg}{a \cdot y \cdot x^{y-1}} provided that all factors involving x cancel out.

\int x^{y-1} \cdot f(g(x)) \cdot dx = \int {\color{red} x^{y-1}} \cdot f(g) \cdot \frac{dg}{a \cdot y \cdot {\color{red} x^{y-1}}} = \frac{1}{a \cdot y} \int f(g) \cdot dg

The product rule for integration is called Integration by partsw

g \cdot h' = (g \cdot h)' - g' \cdot h
\int g \cdot h' \cdot dx = g \cdot h - \int g' \cdot h \cdot dx
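For example, with g = x and h' = eˣ the formula gives ∫ x·eˣ dx = x·eˣ − ∫ eˣ dx. A quick SymPy sketch checking that identity (integrate is the real SymPy function):

```python
from sympy import symbols, exp, integrate, simplify

x = symbols('x')
# g = x, h' = exp(x):  g*h - integral of g'*h
by_parts = x * exp(x) - integrate(1 * exp(x), x)
# Compare against integrating x*exp(x) directly
print(simplify(by_parts - integrate(x * exp(x), x)))  # 0
```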

One can use partial fractionsw or even the Taylor seriesw to convert difficult integrals into a more manageable form.

\frac{f(x)}{(x-1)^2} = \frac{a_0(x-1)^0 + a_1(x-1)^1 + \dots + a_n(x-1)^n}{(x-1)^2}

The fundamental theorem of Calculus is:

F(x) - F(a) = \int_a^x\!f(t)\, dt \quad \text{and} \quad F'(x) = f(x)

The fundamental theorem of calculus is just the particular case of the Leibniz integral rule*:

\frac{d}{dx} \left (\int_{a(x)}^{b(x)}f(x,t)\,dt \right) = f\big(x,b(x)\big)\cdot \frac{d}{dx} b(x) - f\big(x,a(x)\big)\cdot \frac{d}{dx} a(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x} f(x,t) \,dt.

In calculus, a function f defined on a subset of the real numbers with real values is called monotonic* if and only if it is either entirely non-increasing, or entirely non-decreasing.[28]

A differential formw is a generalisation of the notion of a differentialw that is independent of the choice of coordinate system*. f(x,y) dx ∧ dy is a 2-form in 2 dimensions (an area element). The derivativew operation on an n-form is an n+1-form; this operation is known as the exterior derivativew. By the generalized Stokes' theoremw, the integral of a function over the boundary of a manifoldw is equal to the integral of its exterior derivative on the manifold itself.

Back to top

Taylor & Maclaurin series

If we know the value of a smooth functionw at x=0 (smooth means all its derivatives are continuousw) and we also know the value of all of its derivatives at x=0 then we can determine the value at any other point x by using the Maclaurin seriesw. ("!" means factorialw)

a_0 x^0 + a_1 x^1 + a_2 x^2 + a_3 x^3 \cdots \quad \text{where} \quad a_n = {f^{(n)}(0) \over n!}

The proof of this is actually quite simple. Plugging in a value of x=0 causes all terms but the first to become zero. So, assuming that such a function exists, a0 must be the value of the function at x=0. Simply differentiate both sides of the equation and repeat for the next term. And so on.

The Taylor seriesw generalizes this formula.
f(z)=\sum_{k=0}^\infty \alpha_k (z-z_0)^k
[Image: Riemann surface for the function ƒ(z) = √z. For the imaginary part rotate 180°.]

An analytic functionw is a function whose Taylor series converges for every z0 in its domainw; analytic functions are infinitely differentiablew.
Any vector g = (z0, α0, α1, ...) is a germ* if it represents a power series of an analytic functionw around z0 with some radius of convergence r > 0.
The set of germs \mathcal G is a Riemann surfacew.
Riemann surfaces are the objects on which multi-valued functions become single-valued.
A connected component* of \mathcal G (i.e., an equivalence class) is called a sheaf*.

We can easily determine the Maclaurin series expansion of the exponential functionw e^x (because it is equal to its own derivative).[26]

e^x = \sum_{n = 0}^{\infty} {x^n \over n!} = {x^0 \over 0!} + {x^1 \over 1!} + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + \cdots
The above holds true even if x is a matrix. See Matrix exponential*
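A short sketch of the partial sums of this series converging to eˣ in plain Python (the matrix case is not shown):

```python
import math

def exp_series(x, terms=20):
    # Sum x**n / n! for n = 0 .. terms-1
    return sum(x**n / math.factorial(n) for n in range(terms))

print(exp_series(1.0), math.exp(1.0))  # both approximately 2.718281828...
```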

And cos(x)w and sin(x)w (because cosine is the derivative of sine which is the derivative of -cosine)

\cos x = \frac{x^0}{0!} - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots

\sin x = \frac{x^1}{1!} - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots

It then follows that e^{ix}=\cos x+i\sin x=\operatorname{cis} x and therefore e^{i \pi}=-1 + i\cdot0. See Euler's formulaw

x is the angle in radians*.
This makes the equation for a circle in the complex plane, and by extension sine and cosine, extremely simple and easy to work with especially with regard to differentiation and integration.
\frac{d(e^{i \cdot k \cdot t})}{dt} = i \cdot k \cdot e^{i \cdot k \cdot t}
Differentiation and integration are replaced with multiplication and division. Calculus is replaced with algebra. Therefore any expression that can be represented as a sum of sine waves can be easily differentiated or integrated.

Back to top

Fourier Series


The Maclaurin series can't be used for a discontinuous function like a square wave because it is not differentiable. (Distributions* make it possible to differentiate functions whose derivatives do not exist in the classical sense. See Generalized function*.)

But remarkably we can use the Fourier seriesw to expand it or any other periodic functionw into an infinite sum of sine waves each of which is fully differentiablew!

f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty \left[a_n\cos\left(\tfrac{2\pi nt}{p}\right)+b_n\sin\left(\tfrac{2\pi nt}{p}\right)\right]
a_n = \frac{2}{p}\int_{t_0}^{t_0+p} f(t)\cdot  \cos\left(\tfrac{2\pi nt}{p}\right)\ dt
b_n = \frac{2}{p}\int_{t_0}^{t_0+p} f(t)\cdot  \sin\left(\tfrac{2\pi nt}{p}\right)\ dt
[Image: graph of sin2(x) = 0.5*cos(0x) - 0.5*cos(2x), i.e. half of one minus the cosine of twice x]

The reason this works is because sine and cosine are orthogonal functions*.
\langle sin,cos\rangle=0.
That means that multiplying any 2 sine waves of frequency n and frequency m and integrating over one period will always equal zero unless n=m.
See the graph of sin2(x) to the right.
\sin mx \cdot \sin nx = \frac{\cos (m - n)x - \cos (m+n) x}{2}
See Amplitude_modulation*
And of course ∫ fn*(f1+f2+f3+...) = ∫ (fn*f1) + ∫ (fn*f2) + ∫ (fn*f3) +...
The complex form of the Fourier series uses complex exponentials instead of sine and cosine and uses both positive and negative frequencies (clockwise and counter clockwise) whose imaginary parts cancel.
The complex coefficients encode both amplitude and phase and are complex conjugates of each other.
F(\nu) = \mathcal{F}\{f\} = \int_{\mathbb{R}^n} f(x) e^{-2 \pi i x\cdot\nu} \, \mathrm{d}x
where the dot between x and ν indicates the inner productw of Rn.
A 2 dimensional Fourier series is used in video compression.
A discrete Fourier transform* can be computed very efficiently by a fast Fourier transform*.
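A minimal NumPy sketch: the FFT of a sampled sine wave concentrates its energy at the bin matching its frequency (np.fft.fft is the real NumPy routine; the sample count of 64 is an arbitrary choice):

```python
import numpy as np

n = 64                      # number of samples over one period
t = np.arange(n) / n        # sample times in [0, 1)
signal = np.sin(2 * np.pi * 5 * t)   # 5 cycles per period

spectrum = np.fft.fft(signal)
# Energy appears at bins 5 and n-5 (the positive and negative frequency,
# whose imaginary parts cancel, as described above).
print(np.argmax(np.abs(spectrum[:n // 2])))  # 5
```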
In mathematical analysis, many generalizations of Fourier series have proven to be useful.
They are all special cases of decompositions over an orthonormal basis of an inner product space.[29]
Spherical harmonics* are a complete set of orthogonal functions on the sphere, and thus may be used to represent functions defined on the surface of a sphere, just as circular functions (sines and cosines) are used to represent functions on a circle via Fourier series.[30]
Spherical harmonics are basis functions* for SO(3). See Laplace seriesw.
Every continuous function in the function space can be represented as a linear combination* of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors.
Every quadratic polynomial can be written as a·1 + b·t + c·t², that is, as a linear combination of the basis functions 1, t, and t².

Back to top


Fourier transformsw generalize Fourier series to nonperiodic functions like a single pulse of a square wave.

The more localized in the time domain (the shorter the pulse) the more the Fourier transform is spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principlew.

The Fourier transform of the Dirac delta functionw gives G(f)=1

G(\omega)=\mathcal{F}\{f(t)\}=\int_{-\infty}^\infty f(t) e^{-i\omega t}dt
Laplace transformsw generalize Fourier transforms to complex frequency s=\sigma+i\omega.
Complex frequency includes a term corresponding to the amount of damping.
F(s)=\mathcal{L}\{f(t)\}=\int_0^\infty f(t) e^{-\sigma t}e^{-i \omega t}dt
\mathcal{L}\{ \delta(t-a) \} = e^{-as}, (assuming a > 0)
\mathcal{L}\{e^{at} \}= \frac{1}{s - a}
The inverse Laplace transformw is given by
f(t) = \mathcal{L}^{-1} \{F\} =  \frac{1}{2\pi i}\lim_{T\to\infty}\int_{\gamma-iT}^{\gamma+iT}F(s)e^{st}\,ds,
where the integration is done along the vertical line Re(s) = γ in the complex planew such that γ is greater than the real part of all singularities* of F(s) and F(s) is bounded on the line, for example if contour path is in the region of convergence*.
If all singularities are in the left half-plane, or F(s) is an entire function* , then γ can be set to zero and the above inverse integral formula becomes identical to the inverse Fourier transform*.[31]
Integral transformsw generalize Fourier transforms to other kernelsw (besides sinew and cosinew).
Cauchy kernel =\frac{1}{\zeta-x} \quad \text{or} \quad \frac{1}{2\pi i} \cdot \frac{1}{\zeta-x}
Hilbert kernel = \cot\frac{\theta-t}{2}
Poisson Kernel:
For the ball of radius r, B_{r}, in Rn, the Poisson kernel takes the form:
P(x,\zeta) = \frac{r^2-|x|^2}{r} \cdot \frac{1}{|\zeta-x|^n} \cdot \frac{1}{\omega_{n}}
where x\in B_{r}, \zeta\in S (the surface of B_{r}), and \omega _{n} is the surface area of the unit n-sphere*.
unit disk (r=1) in the complex plane:[32]
K(x,\phi) = \frac{1^2-|x|^2}{1} \cdot \frac{1}{|e^{i\phi}-x|^2}\cdot \frac{1}{2\pi}
Dirichlet kernel

D_n(x) = \sum_{k=-n}^{n} e^{ikx}=1+2\sum_{k=1}^n\cos(kx)=\frac{\sin\left(\left(n + \frac{1}{2}\right) x \right)}{\sin(\frac{x}{2})} \approx 2\pi\delta(x)

The convolution* theorem states that[33]

\mathcal{F}\{f*g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\}

where \cdot denotes point-wise multiplication. It also works the other way around:

\mathcal{F}\{f \cdot g\}= \mathcal{F}\{f\}*\mathcal{F}\{g\}

By applying the inverse Fourier transform \mathcal{F}^{-1}, we can write:

f*g= \mathcal{F}^{-1}\big\{\mathcal{F}\{f\}\cdot\mathcal{F}\{g\}\big\}


f \cdot g= \mathcal{F}^{-1}\big\{\mathcal{F}\{f\}*\mathcal{F}\{g\}\big\}

This theorem also holds for the Laplace transformw.
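A numeric sketch of the convolution theorem with NumPy: the circular convolution of two short signals matches the inverse FFT of the product of their FFTs (the signal values here are arbitrary):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.0, 1.0, 0.5, 0.25])

# Circular convolution computed directly...
direct = np.array([sum(f[m] * g[(k - m) % 4] for m in range(4))
                   for k in range(4)])
# ...and via the convolution theorem
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

print(np.allclose(direct, via_fft))  # True
```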

The Hilbert transform* is a multiplier operator*. The multiplier of H is σH(ω) = −i sgn(ω) where sgn is the signum function*. Therefore:

\mathcal{F}(H(u))(\omega) = (-i\,\operatorname{sgn}(\omega)) \cdot \mathcal{F}(u)(\omega)

where \mathcal{F} denotes the Fourier transformw.

Since sgn(x) = sgn(2πx), it follows that this result applies to the three common definitions of  \mathcal{F}.

By Euler's formulaw,

\sigma_H(\omega) = \begin{cases}
   i = e^{+\frac{i\pi}{2}}, & \text{for } \omega < 0\\
                         0, & \text{for } \omega = 0\\
  -i = e^{-\frac{i\pi}{2}}, & \text{for } \omega > 0
\end{cases}


Therefore, H(u)(t) has the effect of shifting the phase of the negative frequency* components of u(t) by +90° (π/2 radians) and the phase of the positive frequency components by −90°.

And i·H(u)(t) has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation.

In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear time-invariant system (LTI).

At any given moment, the output is an accumulated effect of all the prior values of the input function.

Back to top

Differential equations

See also: Variation of parameters*
Simple Harmonic Motion Orbit

Simple harmonic motion shown both in real space and phase space*.

Simple harmonic motion* of a mass on a spring is a second-order linear ordinary differential equationw.

 Force = mass*acc = m\frac{\mathrm{d}^2 x}{\mathrm{d}t^2} = -kx,

where m is the inertial mass, x is its displacement from the equilibrium, and k is the spring constant.

Solving for x produces

 x(t) = A\cos\left(\omega t - \varphi\right),

A is the amplitude (maximum displacement from the equilibrium position),  \omega = 2\pi f = \sqrt{k/m} is the angular frequencyw, and φ is the phase.

Energy passes back and forth between the potential energy in the spring and the kinetic energy of the mass.

The important thing to note here is that the frequency of the oscillation depends only on the mass and the stiffness of the spring and is totally independent of the amplitude.

That is the defining characteristic of resonance.
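A sketch integrating m·x″ = −k·x in small time steps (semi-implicit Euler) and comparing against A·cos(ωt); the values of m, k, and the step size are arbitrary choices:

```python
import math

m, k = 2.0, 8.0
omega = math.sqrt(k / m)        # angular frequency, independent of amplitude
x, v, dt = 1.0, 0.0, 1e-4       # start at amplitude A = 1 with zero velocity

t = 0.0
while t < 1.0:
    v += (-k * x / m) * dt      # acceleration = -k x / m
    x += v * dt                 # semi-implicit Euler update
    t += dt

print(x, math.cos(omega * t))   # nearly equal
```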

[Image: RLC series circuit]

Kirchhoff's voltage law* states that the sum of the emfs in any closed loop of any electronic circuit is equal to the sum of the voltage drops* in that loop.[34]

V(t) = V_R + V_L + V_C

V is the voltage, R is the resistance, L is the inductance, C is the capacitance.

V(t) = RI(t) + L \frac{dI(t)}{dt} + \frac{1}{C} \int_{0}^t I(\tau)\, d\tau

I = dQ/dt is the current.

It makes no difference whether the current is a small number of charges moving very fast or a large number of charges moving slowly.

In reality the latter is the case*.

[Image: Damping oscillation*, a typical transient response*]

If V(t)=0 then the only solution to the equation is the transient response which is a rapidly decaying sine wave with the same frequency as the resonant frequency of the circuit.

Like a mass (inductance) on a spring (capacitance) the circuit will resonate at one frequency.
Energy passes back and forth between the capacitor and the inductor with some loss as it passes through the resistor.

If V(t)=sin(t) from -∞ to +∞ then the only solution is a sine wave with the same frequency as V(t) but with a different amplitude and phase.

If V(t) is zero until t=0 and then equals sin(t) then I(t) will be zero until t=0 after which it will consist of the steady state response plus a transient response.

From Wikipedia:Characteristic equation (calculus):

Starting with a linear homogeneous differential equation with constant coefficients a_{n}, a_{n-1}, \ldots , a_{1}, a_{0},

a_{n}y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_{1}y^\prime + a_{0}y = 0

it can be seen that if y(x) = e^{rx} \, , each term would be a constant multiple of  e^{rx} \, . This results from the fact that the derivative of the exponential functionw  e^{rx} \, is a multiple of itself. Therefore, y' = re^{rx} \, , y'' = r^{2}e^{rx} \, , and y^{(n)} = r^{n}e^{rx} \, are all multiples. This suggests that certain values of  r \, will allow multiples of  e^{rx} \, to sum to zero, thus solving the homogeneous differential equation.[35] In order to solve for  r \, , one can substitute y = e^{rx} \, and its derivatives into the differential equation to get

a_{n}r^{n}e^{rx} + a_{n-1}r^{n-1}e^{rx} + \cdots + a_{1}re^{rx} + a_{0}e^{rx} = 0

Since  e^{rx} \, can never equate to zero, it can be divided out, giving the characteristic equation

a_{n}r^{n} + a_{n-1}r^{n-1} + \cdots + a_{1}r + a_{0} = 0

By solving for the roots,  r \, , in this characteristic equation, one can find the general solution to the differential equation.[36][37] For example, if  r \, is found to equal 3, then the general solution will be y(x) = ce^{3x} \, , where  c \, is an arbitrary constantw.

Back to top

Partial derivatives

Partial derivativesw and multiple integralsw generalize derivatives and integrals to multiple dimensions.

The partial derivative with respect to one variable \frac{\part f(x,y)}{\part x} is found by simply treating all other variables as though they were constants.

Multiple integrals are found the same way.

Let f(x, y, z) be a scalar function (for example electric potential energy or temperature).

A 2 dimensional example of a scalar function would be an elevation map.
(Contour lines of an elevation map are an example of a level set*.)

The Gradientw of f(x, y, z) is a vector field whose value at each point is a vector (technically it's a covectorw because it has units of distance−1) that points in the direction of steepest ascent with a magnitude equal to the slopew of the function at that point.

You can think of it as how much the function changes per unit distance.

For static (unchanging) fields the electric fieldw is the gradient of the electric potential, up to sign: \mathbf{E} = -\nabla V.

The temperature gradient likewise determines heat flow (heat flows against the gradient, from hot to cold).

\operatorname{grad}(f) = \nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k} = \mathbf{F}

The Divergencew of a vector field is a scalar.

The divergence of the electric field is non-zero wherever there is electric chargew and zero everywhere else.

Field linesw begin and end at charges because the charges create the electric field.

\operatorname{div}\,\mathbf{F} = {\color{red} \nabla\cdot\mathbf{F} } = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right) \cdot (F_x,F_y,F_z) = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}.

The Laplacianw is the divergence of the gradient of a function:

\Delta f = \nabla^2 f = (\nabla \cdot \nabla) f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.
Elliptic operators* generalize the Laplacian.

The Curlw of a vector field describes how much the vector field is twisted.

(The field may even go in circles.)

The curl at a certain point of a magnetic fieldw is the currentw vector at that point because current creates the magnetic fieldw.

In 3 dimensions the dual of the current vector is a bivector.

\text{curl} (\mathbf{F}) = {\color{blue} \nabla \times \mathbf{F} } = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ {\frac{\partial}{\partial x}} & {\frac{\partial}{\partial y}} & {\frac{\partial}{\partial z}} \\ F_x & F_y & F_z \end{vmatrix}

\text{curl}( \mathbf{F}) = \left(\frac{\partial F_z}{\partial y}  - \frac{\partial F_y}{\partial z}\right) \mathbf{i} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) \mathbf{j} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) \mathbf{k}

In 2 dimensions this reduces to a single scalar

\text{curl}( \mathbf{F}) = \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)

The curl of the gradient of any scalar field is always zero.
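A quick symbolic check of this fact with SymPy's vector module (CoordSys3D, gradient, and curl are real SymPy names; the scalar field is an arbitrary example):

```python
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
f = N.x**2 * N.y + N.z * N.y   # an arbitrary scalar field
print(curl(gradient(f)))        # 0 (the zero vector)
```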

The curl of a vector field in 4 dimensions would no longer be a vector. It would be a bivector. However the curl of a bivector field in 4 dimensions would still be a vector.

See also: differential forms*.

The Gradientw of a vector field is a tensor field. Each row is the gradient of the corresponding scalar function:

\nabla \mathbf{F} =
\begin{bmatrix} \frac{\partial}{\partial x} \mathbf{e}_x, & \frac{\partial}{\partial y} \mathbf{e}_y, & \frac{\partial}{\partial z} \mathbf{e}_z \end{bmatrix}
\begin{bmatrix} f_x \mathbf{e}_x \\ f_y \mathbf{e}_y \\ f_z \mathbf{e}_z \end{bmatrix} =
\begin{bmatrix}
{\color{red}  \frac{\partial f_x}{\partial x} \mathbf{e}_{xx} } &
{\color{blue} \frac{\partial f_x}{\partial y} \mathbf{e}_{xy} } &
{\color{blue} \frac{\partial f_x}{\partial z} \mathbf{e}_{xz} } \\
{\color{blue} \frac{\partial f_y}{\partial x} \mathbf{e}_{yx} } &
{\color{red}  \frac{\partial f_y}{\partial y} \mathbf{e}_{yy} } &
{\color{blue} \frac{\partial f_y}{\partial z} \mathbf{e}_{yz} } \\
{\color{blue} \frac{\partial f_z}{\partial x} \mathbf{e}_{zx} } &
{\color{blue} \frac{\partial f_z}{\partial y} \mathbf{e}_{zy} } &
{\color{red}  \frac{\partial f_z}{\partial z} \mathbf{e}_{zz} }
\end{bmatrix}

Remember that \mathbf{e}_{xy} = - \mathbf{e}_{yx} because rotation from y to x is the negative of rotation from x to y.

Partial differential equations can be classified as parabolic*, hyperbolic* and elliptic*.

The total derivativew of f(x(t), y(t)) with respect to t is[38]

\frac{\operatorname df}{\operatorname dt} = { \partial f \over \partial x}{\operatorname dx \over \operatorname dt} + {\partial f \over \partial y}{\operatorname dy \over \operatorname dt }

And the differentialw is

\operatorname df = { \partial f \over \partial x}\operatorname dx + {\partial f \over \partial y} \operatorname dy .

The line integralw along a 2-D vector field is:

\int (V_1 \cdot dx + V_2 \cdot dy) = \int_a^b \bigg [V_1(x(t),y(t)) \frac{dx}{dt} + V_2(x(t),y(t)) \frac{dy}{dt} \bigg ] dt
For a closed loop this becomes \oint (V_1 \cdot dx + V_2 \cdot dy) = \iint \bigg ( \frac{\partial V_2}{\partial x} - \frac{\partial V_1}{\partial y} \bigg ) \cdot dx \cdot dy = \iint \bigg (  {\color{blue} \nabla \times \mathbf{V} } \bigg ) \cdot dx \cdot dy
[Image: radial vector field. Divergence is zero everywhere except at the origin where a charge is located. A line integral around any of the red circles will give the same answer because all the circles contain the same amount of charge.]

Green's theoremw states that if you want to know how many field lines cross (or run parallel to) the boundary of a given region then you can either perform a line integral or you can simply count the number of charges (or the amount of current) within that region. See Divergence theoremw

\oiint_{S} \vec{F} \cdot \ \mathrm{d} \vec{s} = \iiint_D \nabla \cdot \vec{F} \,\mathrm{d}V = \iiint_D \nabla^2 f \,\mathrm{d}V \qquad (\text{the last equality assuming } \vec{F} = \nabla f)

In 2 dimensions this is

\oint_S \vec{F} \cdot \vec{n} \ \mathrm{d} s = \iint_D \nabla \cdot \vec{F} \ \mathrm{d} A= \iint_D \nabla^2 f \ \mathrm{d} A

Green's theorem is perfectly obvious when dealing with vector fields but is much less obvious when applied to complex valued functions in the complex plane.

Back to top

In the complex plane

External link:

The formula for the derivative of a complex function f at a point z0 is the same as for a real function:

f'(z_0) = \lim_{z \to z_0} {f(z) - f(z_0) \over z - z_0 }.

Every complex function can be written in the form f(z)=f(x+iy)=f_x(x,y)+i f_y(x,y)

Because the complex plane is two dimensional, z can approach z0 from an infinite number of different directions.

However, if within a certain region the function f is holomorphicw (that is, complex differentiablew) then, within that region, it has a single derivative whose value does not depend on the direction in which z approaches z0, despite the fact that fx and fy each have two partial derivatives: one in the x direction and one in the y direction.

{df \over dz} \quad = \quad {\part f_x \over \part x} + i {\part f_y \over \part x} \quad = \quad {\part f_y \over \part y} - i {\part f_x \over \part y} \quad = \quad {\part f_x \over \part x} - i {\part f_x \over \part y} \quad = \quad {\part f_y \over \part y} + i {\part f_y \over \part x}
{d^2f \over dz^2} \quad = \quad {\part^2 f_x \over \part x^2} + i {\part^2 f_y \over \part x^2} \quad = \quad {\part^2 f_y \over \part y \part x} - i {\part^2 f_x \over \part y \part x}

This is only possible if the Cauchy–Riemann conditionsw are true.

\frac{\part f_x}{\part x}=\frac{\part f_y}{\part y}\ ,\ \quad \frac{\part f_y}{\part x}=-\frac{\part f_x}{\part y}

An entire function*, also called an integral function, is a complex-valued function that is holomorphic at all finite points over the whole complex plane.

As with real valued functions, a line integral of a holomorphic function depends only on the starting point and the end point and is totally independent of the path taken.

\int f(z) \cdot dz = \int (f_x \cdot dx - f_y \cdot dy) + i \int (f_y \cdot dx + f_x \cdot dy)
\int f(z) \cdot dz = F(z) = \int_0^t f(z(t)) \cdot \frac{dz}{dt} \cdot dt
\int_a^b f(z) \cdot dz = F(b) - F(a)

The starting point and the end point for any loop are the same. This, of course, implies Cauchy's integral theoremw for any holomorphic function f:

\oint f(z) \, dz = \iint \left( \frac{- \partial f_x}{\partial y} + \frac{- \partial f_y}{\partial x}  \right) dx \, dy + i \iint \left( \frac{\partial f_x}{\partial x} + \frac{- \partial f_y}{\partial y} \right)  \, dx \, dy = 0

\oint f(z) \, dz = \iint \left( {\color{blue} \nabla \times \bar{f}} + i {\color{red} \nabla \cdot \bar{f}} \right) \, dx \, dy = 0

Therefore curl and divergence must both be zero for a function to be holomorphic.

Green's theoremw for functions (not necessarily holomorphic) in the complex plane:

\oint f(z) \, dz = 2i \iint \left( df/d\bar{z} \right) \, dx \, dy = i \iint \left( \nabla f \right) \, dx \, dy = i \iint \left( {\partial f \over \partial x} + i {\partial f \over \partial y} \right) \, dx \, dy

Computing the residuew of a monomial[39]


\oint_C (z-z_0)^n dz = \int_0^{2\pi} e^{in \theta} \cdot i e^{i \theta} d \theta = i \int_0^{2\pi} e^{i (n+1) \theta} d\theta = \begin{cases}
2\pi i & \text{if } n = -1 \\
0 & \text{otherwise}
\end{cases}


where C is the circle with radius 1 therefore z \to e^{i\theta} and dz \to d(e^{i\theta}) = ie^{i\theta}d\theta
\oint_{C_r}\frac{f(z)}{z-z_0}dz = \oint_{C_r}\frac{f(z_0)}{z-z_0}dz + \oint_{C_r}\frac{f(z)-f(z_0)}{z-z_0}dz = f(z_0)2\pi i + 0

The last term in the equation above equals zero when r=0. Since its value is independent of r it must therefore equal zero for all values of r.
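A numeric sketch of the monomial integral above: sampling z = e^{iθ} around the unit circle gives approximately 2πi when n = −1 and approximately 0 otherwise:

```python
import cmath

def contour_integral(n, steps=10_000):
    total = 0.0
    for k in range(steps):
        theta = 2 * cmath.pi * k / steps
        z = cmath.exp(1j * theta)
        dz = 1j * z * (2 * cmath.pi / steps)  # dz = i e^{i theta} d theta
        total += z**n * dz
    return total

print(contour_integral(-1))  # approximately 2*pi*i
print(contour_integral(2))   # approximately 0
```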

\bigg | \int_\Gamma f(z) \cdot dz \bigg | \leq Max(|f(z)|) \cdot length(\Gamma)

Cauchy's integral formulaw states that the value of a holomorphic function within a disc is determined entirely by the values on the boundary of the disc.

Divergence can be nonzero outside the disc.

Cauchy's integral formula can be generalized to more than two dimensions.

Laurent series

f^{(0)}(z_0)=\dfrac{1}{2\pi i}\oint_\gamma f(z)\frac{1}{z-z_0}dz

Which gives:

f'(z_0)=\dfrac{1}{2\pi i}\oint_\gamma f(z)\frac{1}{(z-z_0)^2}dz

f''(z_0)=\dfrac{2}{2\pi i}\oint_\gamma f(z)\frac{1}{(z-z_0)^3}dz
f^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_\gamma f(z)\frac{1}{(z-z_0)^{n+1}}\, dz
Note that n does not have to be an integer. See Fractional calculus*.

The Taylor series becomes:

f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n \quad \text{where} \quad a_n=\frac{1}{2\pi i} \oint_\gamma \frac{f(z)\,\mathrm{d}z}{(z-z_0)^{n+1}} = \frac{f^{(n)}(z_0)}{n!}

The Laurent series* for a complex function f(z) about a point z0 is given by:

f(z)=\sum_{n=-\infty}^\infty a_n(z-z_0)^n \quad \text{where} \quad a_n=\frac{1}{2\pi i} \oint_\gamma \frac{f(z)\,\mathrm{d}z}{(z-z_0)^{n+1}} (the equality a_n = f^{(n)}(z_0)/n! holds only for the Taylor case, when f is holomorphic at z0)

The positive subscripts correspond to a line integral around the outer part of the annulus and the negative subscripts correspond to a line integral around the inner part of the annulus. In reality it makes no difference where the line integral is so both line integrals can be moved until they correspond to the same contour gamma. See also: Z-transform*

The function \frac{1}{(z-1)(z-2)} has poles at z=1 and z=2. It therefore has 3 different Laurent series centered on the origin (z0 = 0):

For 0 < |z| < 1 the Laurent series has only positive subscripts and is the Taylor series.
For 1 < |z| < 2 the Laurent series has positive and negative subscripts.
For 2 < |z| the Laurent series has only negative subscripts.

Cauchy formula for repeated integration*:

f^{(-n)}(a) = \frac{1}{(n-1)!} \int_0^a f(z) \left(a-z\right)^{n-1} \,\mathrm{d}z

For every holomorphic functionw f(z)=f(x+iy)=f_x(x,y)+i f_y(x,y) both fx and fy are harmonic functionsw.

Any two-dimensional harmonic function is the real part of a complex analytic functionw.

See complex analysisw.[40]

fy is the harmonic conjugate* of fx.
Geometrically fx and fy are related as having orthogonal trajectories, away from the zeroes of the underlying holomorphic function; the contours on which fx and fy are constant (equipotentials* and streamlines*) cross at right angles.
In this regard, fx+ify would be the complex potential, where fx is the potential function* and fy is the stream function*.[41]
fx and fy are both solutions of Laplace's equationw  \nabla^2 f = 0 so divergence of the gradient is zero
Legendre functions* are solutions to Legendre's differential equation.
This ordinary differential equation is frequently encountered when solving Laplace's equation (and related partial differential equations) in spherical coordinates.
A harmonic functionw is a scalar potential function therefore the curl of the gradient will also be zero.
See Potential theory


Harmonic functions are real analogues to holomorphic functions.
All harmonic functions are analytic, i.e. they can be locally expressed as power series.
This is a general fact about elliptic operators*, of which the Laplacian is a major example.
The value of a harmonic function at any point inside a disk is a weighted average* of the value of the function on the boundary of the disk.
P[u](x) = \int_S u(\zeta)P(x,\zeta)d\sigma(\zeta).\,
The Poisson kernel* gives different weight to different points on the boundary except when x=0.
The value at the center of the disk (x=0) equals the average of the equally weighted values on the boundary.
All locally integrable functions satisfying the mean-value property are both infinitely differentiable and harmonic.
The kernel itself appears to simply be 1/r^n shifted to the point x and multiplied by different constants.
For a circle (K = Poisson Kernel):


\oint_0^{2\pi} f(Re^{i\theta}) K(R,r,\theta-\phi) d\theta

\frac{d(a(x,y)+ib(x,y))}{d(x+iy)} = \frac{da+idb}{dx+idy} = \frac{(da+idb)(dx-idy)}{dx^2+dy^2} = \frac{dadx+dbdy+i(dbdx-dady)}{dx^2+dy^2}

\frac{d(a(x,y)+ib(x,y))}{d(x+iy)} = \frac{da}{dx} +\frac{db}{dy} +i \bigg(\frac{db}{dx}- \frac{da}{dy} \bigg) = {\color{red} \nabla \cdot f} + i {\color{blue} \nabla \times f}

Back to top

Geometric calculus

See also: Geometric_algebra#Geometric_calculus*

From Wikipedia:Geometric calculusw:

Geometric calculus extends the geometric algebra to include differentiation and integration. The formalism is powerful and can be shown to encompass other mathematical theories including differential geometry and differential forms.

With a geometric algebra given, let a and b be vectors* and let F(a) be a multivectorw-valued function. The directional derivativew of F(a) along b is defined as

\nabla_b F(a) = \lim_{\epsilon \rightarrow 0}{\frac{F(a + \epsilon b) - F(a)}{\epsilon}}

provided that the limit exists, where the limit is taken for scalar ε. This is similar to the usual definition of a directional derivative but extends it to functions that are not necessarily scalar-valued.

Next, choose a set of basis vectorws \{e_i\} and consider the operators, noted (\partial_i), that perform directional derivatives in the directions of (e_i):

\partial_i : F \mapsto (x\mapsto \nabla_{e_i} F(x))

Then, using the Einstein summation notation*, consider the operator:

e^i\partial_i

which means:

F \mapsto e^i\partial_i F

or, more verbosely:

F \mapsto (x\mapsto e^i\nabla_{e_i} F(x))

It can be shown that this operator is independent of the choice of frame, and can thus be used to define the geometric derivative:

\nabla = e^i\partial_i

This is similar to the usual definition of the gradientw, but it, too, extends to functions that are not necessarily scalar-valued.

It can be shown that the directional derivative is linear regarding its direction, that is:

\nabla_{\alpha a + \beta b} = \alpha\nabla_a + \beta\nabla_b

From this it follows that the directional derivative is the inner product of its direction with the geometric derivative. All that needs to be observed is that the direction a can be written a = (a\cdot e^i) e_i, so that:

\nabla_a = \nabla_{(a\cdot e^i)e_i} = (a\cdot e^i)\nabla_{e_i} = a\cdot(e^i\nabla_{e^i}) = a\cdot \nabla

For this reason, \nabla_a F(x) is often noted a\cdot \nabla F(x).

The standard order of operationsw for the geometric derivative is that it acts only on the function closest to its immediate right. Given two functions F and G, then for example we have

\nabla FG = (\nabla F)G.

Although the partial derivative exhibits a product rulew, the geometric derivative only partially inherits this property. Consider two functions F and G:

\begin{align}\nabla(FG) &= e^i\partial_i(FG) \\
&= e^i((\partial_iF)G+F(\partial_iG)) \\
&= e^i(\partial_iF)G+e^iF(\partial_iG) \end{align}

Since the geometric product is not commutativew with e^iF \ne Fe^i in general, we cannot proceed further without new notation. A solution is to adopt the overdot* notation, in which the scope of a geometric derivative with an overdot is the multivector-valued function sharing the same overdot. In this case, if we define

\dot{\nabla}F\dot{G} = e^i F (\partial_i G)

then the product rule for the geometric derivative is

\nabla(FG) = \nabla FG+\dot{\nabla}F\dot{G}

Let F be an r-grade multivector. Then we can define an additional pair of operators, the interior and exterior derivatives,

\nabla \cdot F = \langle \nabla F \rangle_{r-1} = e^i \cdot \partial_i F
\nabla \wedge F = \langle \nabla F \rangle_{r+1} = e^i \wedge \partial_i F.

In particular, if F is grade 1 (vector-valued function), then we can write

\nabla F = \nabla \cdot F + \nabla \wedge F

and identify the divergencew and curlw as

\nabla \cdot F = \operatorname{div} F
\nabla \wedge F = I \, \operatorname{curl} F.

Note, however, that these two operators are considerably weaker than the geometric derivative counterpart for several reasons. Neither the interior derivative operator nor the exterior derivative operator is invertible*.

The reason for defining the geometric derivative and integral as above is that they allow a strong generalization of Stokes' theoremw. Let \mathsf{L}(A;x) be a multivector-valued function of r-grade input A and general position x, linear in its first argument. Then the fundamental theorem of geometric calculus relates the integral of a derivative over the volume V to the integral over its boundary:

\int_V \dot{\mathsf{L}} \left(\dot{\nabla} dX;x \right) = \oint_{\partial V} \mathsf{L} (dS;x)

As an example, let \mathsf{L}(A;x)=\langle F(x) A I^{-1} \rangle for a vector-valued function F(x) and a (n-1)-grade multivector A. We find that

\begin{align}\int_V \dot{\mathsf{L}} \left(\dot{\nabla} dX;x \right) &= \int_V \langle\dot{F}(x)\dot{\nabla} dX I^{-1} \rangle \\
&= \int_V \langle\dot{F}(x)\dot{\nabla} |dX| \rangle \\
&= \int_V \nabla \cdot F(x) |dX| . \end{align}

and likewise

\begin{align}\oint_{\partial V} \mathsf{L} (dS;x) &= \oint_{\partial V} \langle F(x) dS I^{-1} \rangle \\
&= \oint_{\partial V} \langle F(x) \hat{n} |dS| \rangle \\
&= \oint_{\partial V} F(x) \cdot \hat{n} |dS| \end{align}

Thus we recover the divergence theoremw,

\int_V \nabla \cdot F(x) |dX| = \oint_{\partial V} F(x) \cdot \hat{n} |dS|.

Back to top

Calculus of variations

Calculus of variations*, Functional*, Functional analysis*, Higher-order function*

Whereas calculus is concerned with infinitesimal changes of variables, calculus of variations is concerned with infinitesimal changes of the underlying function itself.

Calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals.

A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is obviously a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, where the optical length depends upon the material of the medium. One corresponding concept in mechanics is the principle of least action.[42]

Back to top

Discrete mathematics

Groups and rings

Main articles: Algebraic structurew, Abstract algebraw, and group theory*

Addition and multiplication can be generalized in so many ways that mathematicians have created a whole system of categories just to organize them.


Any straight line through the origin forms a group. Adding any 2 points on the line results in a 3rd point that is also on the line.

A magma is a set with a single closed binary operation (usually, but not always*, addition).

a + b = c

A semigroup is a magma where the addition is associative. See also Semigroupoid*

a + (b + c) = (a + b) + c

A monoid is a semigroup with an additive identity element.

a + 0 = a

A group is a monoid with additive inverse elements.

a + (-a) = 0

An abelian group is a group where the addition is commutative.

a + b = b + a

A pseudo-ring* is an abelian group that also has a second closed, associative, binary operation (usually, but not always, multiplication).

a * (b * c) = (a * b) * c
And these two operations satisfy a distributive law.
a(b + c) = ab + ac

A ring is a pseudo-ring that has a multiplicative identity

a * 1 = a

A commutative ring* is a ring where multiplication commutes, (e.g. integers)

a * b = b * a

A field is a commutative ring where every nonzero element has a multiplicative inverse (and thus there is a multiplicative identity),

a * (1/a) = 1
The existence of a multiplicative inverse for every nonzero element automatically implies that there are no zero divisors* in a field
if ab=0 for some a≠0, then we must have b=0 (we call this having no zero-divisors).

The characteristic* of ring R, denoted char(R), is the number of times one must add the multiplicative identity to get the additive identity.

The center of a group* G consists of all those elements x in G such that xg = gx for all g in G. This is a normal subgroup* of G.[43] See also: Centralizer and normalizer*.

All non-zero nilpotent* elements are zero divisors*.

The square matrixw A = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{pmatrix} is nilpotent.
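A quick NumPy check that A³ is the zero matrix:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

print(np.linalg.matrix_power(A, 3))  # the zero matrix, so A is nilpotent
```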

Back to top

Set theory

"See also: Naive set theory*, Zermelo–Fraenkel set theory*, Set theoryw, Set notation*, Set-builder notation*, Setw, Algebra of sets*, Field of sets*, and Sigma-algebra*

\varnothing is the empty set (the additive identity)

\mathbf{U} is the universe of all elements (the multiplicative identity)

a \in A means that a is an elementw (or member) of set A. In other words a is in A.

\{ x \in \mathbf{A} : x \notin \mathbb{R}  \} means the set of all x's that are members of the set A such that x is not a member of the realsw. Could also be written \{ \mathbf{A} - \mathbb{R}  \}

A setw does not allow multiple instances of an element. \{1,1,2\} = \{1,2\}

A multisetw does allow multiple instances of an element. \{1,1,2\} \neq \{1,2\}

A set can contain other sets. \{1,\{2\},3\} \neq \{1,2,3\}

A \subset B means that A is a proper subsetw of B

A \subseteq A means that A is a subsetw of itself. But a set is not a proper subsetw of itself.

A \cup B is the Unionw of the sets A and B. In other words, \{A+B\}


A \cap B is the Intersectionw of the sets A and B. In other words, \{A \cdot B\} All a's in B.

Associative: A \cdot \{B \cdot C\} = \{A \cdot B\} \cdot C
Distributive: A \cdot \{B + C\}=\{A \cdot B\} + \{A \cdot C\}
Commutative: \{A \cdot B\} =\{B \cdot A\}

A \setminus B is the Set differencew of A and B. In other words, \{A - A \cdot B\}

\overline{A} or A^c = \{U - A\} is the complementw of A.

A \bigtriangleup B or A \ominus B is the Anti-intersectionw of sets A and B which is the set of all objects that are a members of either A or B but not in both.

A \bigtriangleup B = (A + B) - (A \cdot B) = (A - A \cdot B) + (B - A \cdot B)

A \times B is the Cartesian productw of A and B which is the set whose members are all possible ordered pairsw (a, b) where a is a member of A and b is a member of B.

The Power setw of a set A is the set whose members are all of the possible subsets of A.

A cover* of a set X is a collection of sets whose union contains X as a subset.[44]

A subset A of a topological space X is called dense* (in X) if every point x in X either belongs to A or is arbitrarily "close" to a member of A.

A subset A of X is meagre* if it can be expressed as the union of countably many nowhere dense subsets of X.

Disjoint union* of sets A_0 = {1, 2, 3} and A_1 = {1, 2, 3} can be computed by finding:

A^*_0 = \{(1, 0), (2, 0), (3, 0)\}
A^*_1 = \{(1, 1), (2, 1), (3, 1)\}

A_0 \sqcup A_1 = A^*_0 \cup A^*_1 = \{(1, 0), (2, 0), (3, 0), (1, 1), (2, 1), (3, 1)\}

Let H be the subgroup of the integers (mZ, +) = ({..., −2m, −m, 0, m, 2m, ...}, +) where m is a positive integer.

Then the cosets* of H are the mZ + a = {..., −2m+a, −m+a, a, m+a, 2m+a, ...}.
There are no more than m cosets, because mZ + m = m(Z + 1) = mZ.
The coset (mZ + a, +) is the congruence classw of a modulo m.[45]
Cosets are not usually themselves subgroups of G, only subsets.

\exists means "there exists at least one"

\exists! means "there exists one and only one"

\forall means "for all"

\land means "and" (not to be confused with wedge productw)

\lor means "or" (not to be confused with antiwedge productw)

Back to top


\vert A \vert is the cardinalityw of A which is the number of elements in A. See measurew.

P(A) = {\vert A \vert \over \vert U \vert} is the unconditional probabilityw that A will happen.

P(A \mid B) = {\vert A \cdot B \vert \over \vert B \vert} is the conditional probabilityw that A will happen given that B has happened.

P(A + B) = P(A) + P(B) - P(A \cdot B) means that the probability that A or B will happen is the probability of A plus the probability of B minus the probability that both A and B will happen.

P(A \cdot B) = P(A \cdot B \mid B)P(B) = P(A \cdot B \mid A)P(A) means that the probability that A and B will happen is the probability of "A and B given B" times the probability of B.

P(A \cdot B \mid B) = \frac{P(A \cdot B \mid A) \, P(A)}{P(B)}, is Bayes' theorem*

From Wikipedia:Base rate fallacy:

In a city of 1 million inhabitants let there be 100 terrorists and 999,900 non-terrorists. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software. 99% of the time it behaves correctly. 1% of the time it behaves incorrectly, ringing when it should not and failing to ring when it should. Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T | B), the probability that a terrorist has been detected given the ringing of the bell? Someone making the 'base rate fallacy' would infer that there is a 99% chance that the detected person is a terrorist. But that is not even close. For every 1 million faces scanned it will see 100 terrorists and will correctly ring 99 times. But it will also ring falsely 9,999 times. So the true probability is only 99/(9,999+99) or about 1%.
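The same computation as a short Python sketch (numbers taken from the example above):

```python
population = 1_000_000
terrorists = 100
accuracy = 0.99

true_alarms = terrorists * accuracy                        # 99
false_alarms = (population - terrorists) * (1 - accuracy)  # 9999

# P(terrorist | bell) by Bayes' theorem
print(true_alarms / (true_alarms + false_alarms))  # about 0.0098, i.e. ~1%
```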

permutationw relates to the act of arranging all the members of a setw into some sequencew or order*.

The number of permutations of n distinct objects is n!w.[46]

A derangement is a permutation of the elements of a set, such that no element appears in its original position.

In other words, derangement is a permutation that has no fixed points*.

The number of derangements* of a set of size n, usually written !n*, is called the "derangement number" or "de Montmort number".[47]

The rencontres numbers* are a triangular array of integers that enumerate permutations of the set { 1, ..., n } with specified numbers of fixed points: in other words, partial derangements.[48]

a combinationw is a selection of items from a collection, such that the order of selection does not matter.

For example, given three numbers, say 1, 2, and 3, there are three ways to choose two from this set of three: 12, 13, and 23.

More formally, a k-combination of a setw S is a subset of k distinct elements of S.

If the set has n elements, the number of k-combinations is equal to the binomial coefficientw

\binom nk = \textstyle\frac{n!}{k!(n-k)!}. Pronounced n choose k. The set of all k-combinations of a set S is often denoted by \textstyle\binom Sk.
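A sketch using Python's standard library (math.comb requires Python 3.8+; itertools.combinations is older):

```python
import math
from itertools import combinations

print(math.comb(3, 2))                   # 3, i.e. "3 choose 2"
print(list(combinations([1, 2, 3], 2)))  # [(1, 2), (1, 3), (2, 3)]
```

This matches the example above: 12, 13, and 23 are the three 2-combinations of {1, 2, 3}.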

The central limit theorem (CLT) establishes that, in most situations, when independent random variables* are added, their properly normalized sum tends toward a normal distributionw (informally a "bell curve") even if the original variables themselves are not normally distributed.[49]

Standard deviation diagram

A plot of normal distributionw (or bell-shaped curve) where each band has a width of 1 standard deviation – See also: 68–95–99.7 rule*

In statisticsw, the standard deviation (SD, also represented by the Greek letter sigma σw or the Latin letter s) is a measure that is used to quantify the amount of variation or dispersion* of a set of data values.[50]

A low standard deviation indicates that the data points tend to be close to the meanw (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values.[51]

The hypergeometric distribution* is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a success or a failure.

In contrast, the binomial distribution* describes the probability of k successes in n draws with replacement.[52]

See also Dirichlet distribution* and Rice distribution*

Back to top


See also: Higher category theoryw and Multivalued function (misnomer)*

Every functionw has exactly one output for every input.

If the function f(x) is invertible* then its inverse functionw f−1(x) has exactly one output for every input.

If it isn't invertible then it doesn't have an inverse function.

f(x)=x/(x-1) is an involution* which is a function that is its own inverse function. f(f(x))=x

A morphismw is exactly the same as a function but in Category theoryw every morphism has an inverse which is allowed to have more than one value or no value at all.

Categories* consist of:

Objects (usually Setsw)
Morphismsw (usually mapsw) possessing:
one source object (domain)
one target object (codomain)

a morphism is represented by an arrow:

f(x)=y is written f : x \to y where x is in X and y is in Y.
g(y)=z is written g : y \to z where y is in Y and z is in Z.

The image* of y is z.

The preimage* (or fiber*) of z is the set of all y whose image is z and is denoted g^{-1}[z]

A picture is worth 1000 words: [Image: covering space diagram]

A space Y is a covering space* (a fiber bundle) of space Z if the map g : y \to z is locally homeomorphicw.

A covering space is a universal covering space* if it is simply connected*.
The concept of a universal cover was first developed to define a natural domain for the analytic continuation* of an analytic functionw.
The general theory of analytic continuation and its generalizations are known as sheaf theory*.
The set of germs* can be considered to be the analytic continuation of an analytic function.

A topological space is (path-)connected* if no part of it is disconnected.

[Image: torus cycles (not simply connected)]

A space is simply connected* if there are no holes passing all the way through it (therefore any loop can be shrunk to a point)

See Homology*

Composition of morphisms:

g(f(x)) is written g \circ f
f is the pullback* of g
f is the lift* of g \circ f
? is the pushforward* of ?

A homomorphism* is a map from one set to another of the same type which preserves the operations of the algebraic structure:

f(x \cdot y) = f(x) \cdot f(y)
f(x + y) = f(x) + f(y)
See Cauchy's functional equation*
A Functor* is a homomorphism with a domain in one category and a codomain in another.
A group homomorphism* from (G, ∗) to (H, ·) is a function* h : GH such that
 h(u*v) = h(u) \cdot h(v) = h(c) for all u*v = c in G.
For example log(a*b) = log(a) + log(b)
Since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism* of groups.
See also group action* and group orbit*

A Multicategory* has morphisms with more than one source object.

A Multilinear map* f(v_1,\ldots,v_n) = W:

f\colon V_1 \times \cdots \times V_n \to W\text{,}

has a corresponding Linear mapw:F(v_1\otimes \cdots \otimes v_n) = W:

F\colon V_1 \otimes \cdots \otimes V_n \to W\text{,}

Back to top

Numerical methods

See also: Explicit and implicit methods*
From Wikipedia:Numerical analysis:

One of the simplest problems is the evaluation of a function at a given point.

The most straightforward approach, just plugging the number into the formula, is sometimes not very efficient.

For polynomials, a better approach is using the Horner scheme*, since it reduces the necessary number of multiplications and additions.

Generally, it is important to estimate and control round-off errors* arising from the use of floating point* arithmetic.

Interpolation* solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?

Extrapolation* is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.


Regression* is also similar, but it takes into account that the data is imprecise.

Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function.

The least squares*-method is one popular way to achieve this.

Much effort has been put in the development of methods for solving systems of linear equations*.

Standard direct methods, i.e., methods that use some matrix decomposition*
Gaussian elimination*, LU decomposition*, Cholesky decomposition* for symmetricw (or hermitianw) and positive-definite matrixw, and QR decomposition* for non-square matrices.
Iterative methods*
Jacobi method*, Gauss–Seidel method*, successive over-relaxation* and conjugate gradient method* are usually preferred for large systems. General iterative methods can be developed using a matrix splitting*.

Root-finding algorithms* are used to solve nonlinear equations.

If the function is differentiablew and the derivative is known, then Newton's methodw is a popular choice.
Linearization* is another technique for solving nonlinear equations.

Optimizationw problems ask for the point at which a given function is maximized (or minimized).

Often, the point also has to satisfy some constraints*.


Differential equationw: If you set up 100 fans to blow air from one end of the room to the other and then you drop a feather into the wind, what happens?

The feather will follow the air currents, which may be very complex.

One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again.

This is called the Euler method* for solving an ordinary differential equation.
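A minimal sketch of the Euler method on a one-dimensional example, dy/dt = −y with y(0) = 1, whose exact solution is e^(−t) (the step size is an arbitrary choice):

```python
import math

def f(t, y):
    return -y          # the "wind speed" at the current position

y, t, dt = 1.0, 0.0, 0.01
while t < 1.0:
    y += f(t, y) * dt  # advance in a straight line for one step
    t += dt

print(y, math.exp(-1.0))  # Euler estimate vs exact value
```

Shrinking dt makes the estimate converge to the exact value, at the cost of more steps.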

Back to top

Information theory

From Wikipedia:Information theory:

Information theory studies the quantification, storage, and communication of information.

Communications over a channel—such as an ethernet cable—is the primary motivation of information theory.

From Wikipedia:Quantities of information:

Shannon derived a measure of information content called the self-information* or "surprisal" of a message m:

 I(m)  = \log \left( \frac{1}{p(m)} \right)  =  - \log( p(m) ) \,

where p(m) = \mathrm{Pr}(M=m) is the probability that message m is chosen from all possible choices in the message space M. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of bits*.

Information is transferred from a source to a recipient only if the recipient of the information did not already have the information to begin with. Messages that convey information that is certain to happen and already known by the recipient contain no real information. Infrequently occurring messages contain more information than more frequently occurring messages. This fact is reflected in the above equation - a certain message, i.e. of probability 1, has an information measure of zero. In addition, a compound message of two (or more) unrelated (or mutually independent) messages would have a quantity of information that is the sum of the measures of information of each message individually. That fact is also reflected in the above equation, supporting the validity of its derivation.

An example: The weather forecast broadcast is: "Tonight's forecast: Dark. Continued darkness until widely scattered light in the morning." This message contains almost no information. However, a forecast of a snowstorm would certainly contain information since such does not happen every evening. There would be an even greater amount of information in an accurate forecast of snow for a warm location, such as Miami. The amount of information in a forecast of snow for a location where it never snows (impossible event) is the highest (infinity).

The more surprising a message is the more information it conveys. The message "LLLLLLLLLLLLLLLLLLLLLLLLL" conveys exactly as much information as the message "25 L's". The first message which is 25 bytes long can therefore be "compressed" into the second message which is only 6 bytes long.
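A short sketch of both points: the self-information formula in bits, and the run-length "compression" of the 25-L message:

```python
import math

def self_information_bits(p):
    # I(m) = -log2(p(m)); rare messages carry more information
    return -math.log2(p)

print(self_information_bits(1.0))   # 0.0 bits: a certain message
print(self_information_bits(0.01))  # about 6.6 bits: a surprising one

message = "L" * 25
compressed = f"{len(message)} {message[0]}'s"   # "25 L's"
print(compressed, len(compressed), "bytes vs", len(message))
```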

Back to top

See also: Time complexity*

Back to top

Tactical thinking

The prisoner's dilemma, a non-zero-sum game:

            Tactic X   Tactic Y
Tactic A     1, 1       5, -5
Tactic B    -5, 5      -5, -5
See also Wikipedia:Strategy (game theory)
From Wikipedia:Game theory:

In the accompanying example there are two players; Player one (blue) chooses the row and player two (red) chooses the column.

Each player must choose without knowing what the other player has chosen.

The payoffs are provided in the interior.

The first number is the payoff received by Player 1; the second is the payoff for Player 2.

Tit for tat is a simple and highly effective tactic in game theory for the iterated prisoner's dilemma.

An agent using this tactic will first cooperate, then subsequently replicate an opponent's previous action.

If the opponent previously was cooperative, the agent is cooperative.

If not, the agent is not.[53]
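A minimal sketch of tit for tat in play. The payoff numbers and the always-defect opponent are illustrative assumptions (the conventional prisoner's-dilemma values), not taken from the tables here:

# (my move, opponent's move) -> (my payoff, opponent's payoff)
# Illustrative prisoner's-dilemma payoffs: C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(agent1, agent2, rounds=10):
    history1, history2 = [], []   # each agent's own past moves
    score1 = score2 = 0
    for _ in range(rounds):
        move1 = agent1(history2)  # each agent sees only the opponent's history
        move2 = agent2(history1)
        p1, p2 = PAYOFFS[(move1, move2)]
        score1 += p1
        score2 += p2
        history1.append(move1)
        history2.append(move2)
    return score1, score2

print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mutual defection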

A zero-sum game

           X         Y
  A      1, -1    -1, 1
  B     -1, 1      1, -1

In zero-sum games the sum of the payoffs is always zero (meaning that a player can only benefit at the expense of others).

Cooperation is impossible in a zero-sum game.

John Forbes Nash proved that every finite game has a Nash equilibrium: a set of tactics (possibly randomized), one per player, from which no player can gain by changing only their own tactic.

In the zero-sum game shown above, the optimum tactic for player 1 is to randomly choose A or B with equal probability.
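A quick sketch checking this, using the payoffs from the table above (with the columns labeled X and Y): the 50/50 mix guarantees player 1 an expected payoff of 0 whatever player 2 does.

# Player 1's payoffs from the zero-sum table above.
payoff_p1 = {("A", "X"): 1, ("A", "Y"): -1,
             ("B", "X"): -1, ("B", "Y"): 1}

# Mixing A and B with equal probability makes the expected payoff 0
# no matter which column player 2 chooses.
for opponent in ("X", "Y"):
    expected = 0.5 * payoff_p1[("A", opponent)] + 0.5 * payoff_p1[("B", opponent)]
    print(opponent, expected)  # 0.0 both times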

Strategic thinking differs from tactical thinking by taking into account how the short-term goals, and therefore the optimum tactics, change over time.

For example, the opening, middlegame, and endgame of chess require radically different tactics.

Back to top


See also Galilean relativity*
From Wikipedia:Philosophiæ Naturalis Principia Mathematica:

Something is known beyond a reasonable doubt if any doubt that it is true is unreasonable. A doubt is reasonable if it is consistent with the laws of cause and effect.

In the four rules, as they came finally to stand in the 1726 edition, Newton effectively offers a methodology for handling unknown phenomena in nature and reaching towards explanations for them.

Rule 1: We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
Rule 2: Therefore to the same natural effects we must, as far as possible, assign the same causes.
Rule 3: The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
Rule 4: In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypothesis that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.

Classical mechanicsw

Newtonian mechanicsw, Lagrangian mechanicsw, and Hamiltonian mechanicsw
The difference between the net kinetic energy and the net potential energy is called the “Lagrangian.”
The action is defined as the time integral of the Lagrangian.
The Hamiltonian is the sum of the kinetic and potential energies.
Noether's theorem* states that every differentiable symmetry of the action* of a physical system has a corresponding conservation law*.
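In symbols, a standard summary of these definitions (T is the kinetic energy, V the potential energy, and q a generalized coordinate; H = T + V in the common case of velocity-independent potentials and time-independent constraints):

L = T - V, \qquad S = \int L \, dt, \qquad H = T + V

Requiring the action S to be stationary yields the Euler-Lagrange equation

\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) = \frac{\partial L}{\partial q}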
Twin paradox

Relativistic mechanics*

Special relativity*, and General relativity*
Energy is conserved in relativity, and proper velocity is proportional to momentum at all velocities.
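In symbols (standard results, with \gamma = 1/\sqrt{1 - v^2/c^2} and proper velocity w = \gamma v):

p = m w = \gamma m v, \qquad E = \gamma m c^2, \qquad E^2 = (pc)^2 + (mc^2)^2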

Quantum mechanicsw

Highly recommended:

Thinking Physics Is Gedanken Physics by Lewis Carroll Epstein

Back to top

Dimensional analysis

See Natural units*
From Wikipedia:Dimensional analysis:

Any physical law that accurately describes the real world must be independent of the units (e.g. km or mm) used to measure the physical variables.

Consequently, every possible commensurate equation for the physics of the system can be written in the form

a_0 \cdot D_0 = (a_1 \cdot D_1)^{p_1} (a_2 \cdot D_2)^{p_2}...(a_n \cdot D_n)^{p_n}

The dimension, Dn, of a physical quantity can be expressed as a product of the basic physical dimensions length (L), mass (M), time (T), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J), each raised to a rational power.

Suppose we wish to calculate the range of a cannonball* when fired with a vertical velocity component V_\mathrm{y} and a horizontal velocity component V_\mathrm{x}, assuming it is fired on a flat surface.

The quantities of interest and their dimensions are then

range as L_x
V_\mathrm{x} as L_x/T
V_\mathrm{y} as L_y/T
g as L_y/T^2

The equation for the range may be written:

range = (V_x)^a (V_y)^b (g)^c

Dimensionally, this requires

\mathsf{L}_\mathrm{x} = (\mathsf{L}_\mathrm{x}/\mathsf{T})^a\,(\mathsf{L}_\mathrm{y}/\mathsf{T})^b (\mathsf{L}_\mathrm{y}/\mathsf{T}^2)^c\,

Equating the exponents of L_x, L_y, and T gives 1 = a, 0 = b + c, and 0 = -a - b - 2c, which solve to a = 1, b = 1, c = -1. So range ∝ V_x V_y / g; the full kinematic calculation gives range = 2 V_x V_y / g, and dimensional analysis recovers everything but the dimensionless factor of 2.
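The same bookkeeping as a small linear solve (a sketch; numpy is assumed available):

import numpy as np

# Rows: exponents of L_x, L_y, T.  Columns: dimensions of V_x, V_y, g.
A = np.array([[ 1,  0,  0],   # L_x
              [ 0,  1,  1],   # L_y
              [-1, -1, -2]])  # T
target = np.array([1, 0, 0])  # the range has dimensions L_x

print(np.linalg.solve(A, target))  # [ 1.  1. -1.]  ->  range ~ Vx*Vy/g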

Back to top


See also: Periodic tablew

Uranium atom

The first pair of electrons falls into the lowest shell. Once that shell is filled, no more electrons can go into it; any additional electrons go into higher shells.

The nucleus, however, works differently. The first few neutrons form the first shell, but any additional neutrons continue to fall into that same shell, which continues to expand until it holds 49 pairs of neutrons.

Back to top


References


  1. Wikipedia:Generalization
  2. Wikipedia:Cartesian product
  3. Wikipedia:Tangent bundle
  4. Wikipedia:Lie group
  5. Wikipedia:Sesquilinear form
  6. Wikipedia:Tensor
  7. Wikipedia:Tensor (intrinsic definition)
  8. Wikipedia:Special unitary group
  9. Lawson, H. Blaine; Michelsohn, Marie-Louise (1989). Spin Geometry. Princeton University Pressw. ISBN 978-0-691-08542-5. p. 14.
  10. Friedrich, Thomas (2000). Dirac Operators in Riemannian Geometry. American Mathematical Societyw. ISBN 978-0-8218-2055-1. p. 15.
  11. [missing reference: "Flanders"]
  12. W. K. Clifford, "Preliminary sketch of bi-quaternions," Proc. London Math. Soc. Vol. 4 (1873) pp. 381-395
  13. W. K. Clifford, Mathematical Papers, (ed. R. Tucker), London: Macmillan, 1882.
  14. Wikipedia:Rotor (mathematics)
  15. Roger Penrose (2005). The road to reality: a complete guide to the laws of our universe. Knopf. pp. 203–206. 
  16. E. Meinrenken (2013), "The spin representation", Clifford Algebras and Lie Theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics, 58, Springer-Verlag, doi:10.1007/978-3-642-36216-3_3
  17. S.-H. Dong (2011), "Chapter 2, Special Orthogonal Group SO(N)", Wave Equations in Higher Dimensions, Springer, pp. 13–38 
  18. "Pauli matrices". Planetmath website. 28 March 2008. Retrieved 28 May 2013. 
  19. David Hestenes, Oersted Medal Lecture: "Reforming the Mathematical Language of Physics", Am. J. Phys. 71 (2), February 2003, pp. 104–121. Online: p. 26.
  20. The words map or mapping, transformation, correspondence, and operator are often used synonymously.
  21. Andrew Marx, Shortcut Algebra I: A Quick and Easy Way to Increase Your Algebra I Knowledge and Test Scores, Kaplan Publishing, 2007, ISBN 9781419552885, 288 pages, p. 51.
  22. Wikipedia:Multiplicity (mathematics)
  23. Wikipedia:Partial fraction decomposition
  24. Wikipedia:Basic hypergeometric series
  25. Wikipedia:q-analog
  26. e^x = y = dy/dx
      dx = dy/y = (1/y) dy
      ∫ (1/y) dy = ∫ dx = x = ln(y)
  27. Wikipedia:Product rule
  28. Wikipedia:Monotonic function
  29. Wikipedia:Generalized Fourier series
  30. Wikipedia:Spherical harmonics
  31. Wikipedia:Inverse Laplace transform
  33. Wikipedia:Convolution theorem
  34. Wikipedia:RLC circuit
  35. [missing reference: "eFunda"]
  36. [missing reference: "edwards"]
  37. [missing reference: "cohen"]
  38. Wikipedia:Total derivative
  39. Wikipedia:Residue (complex analysis)
  40. Wikipedia:Potential theory
  41. Wikipedia:Harmonic conjugate
  42. Wikipedia:Calculus of variations
  43. Wikipedia:Center (algebra)
  44. Wikipedia:Cover (topology)
  45. Joshi p. 323
  46. Wikipedia:Permutation
  47. Wikipedia:derangement
  48. Wikipedia:rencontres numbers
  49. Wikipedia:Central limit theorem
  50. Bland, J.M.; Altman, D.G. (1996). "Statistics notes: measurement error". BMJ 312 (7047): 1654. doi:10.1136/bmj.312.7047.1654. PMC 2351401. PMID 8664723.
  51. Wikipedia:standard deviation
  52. Wikipedia:Hypergeometric distribution
  53. Wikipedia:Tit for tat