It has been known since the time of Euclid^{w} that all of geometry can be derived from a handful of objects (points, lines...), a few actions on those objects, and a small number of axioms^{w}. Every field of science likewise can be reduced to a small set of objects, actions, and rules. Math itself is not a single field but rather a constellation of related fields. One way in which new fields are created is by the process of generalization.
A generalization is the formulation of general concepts from specific instances by abstracting common properties. Generalization is the process of identifying the parts of a whole as belonging to the whole.^{[1]}
Foreword:
Mathematical notation^{w} can be extremely intimidating. Wikipedia is full of articles with page after page of indecipherable text. At first glance this article might appear to be the same. I want to assure the reader that every effort has been made to simplify everything as much as possible while also providing links to articles with more in-depth information.
The following has been assembled from countless small pieces gathered from throughout the World Wide Web. I can't guarantee that there are no errors in it. Please report any errors or omissions on this article's talk page.
Numbers
Scalars
 See: Peano axioms^{w} and Hyperoperation^{*}
The basis of all of mathematics is the "Next"^{*} function (the successor function of the Peano axioms). See Graph theory^{w}. Next(0)=1, Next(1)=2, Next(2)=3, Next(3)=4. (We might express this by saying that one differs from nothing as two differs from one.) This defines the Natural numbers^{w} (denoted ℕ). Natural numbers are those used for counting. See Tutorial:Counting.
 These have the convenient property of being transitive^{w}. That means that if a<b and b<c then it follows that a<c. In fact they are totally ordered^{w}. See Order theory^{*}.
Addition^{w} (See Tutorial:arithmetic) is defined as repeatedly calling the Next function, and its inverse is subtraction^{w}. But this leads to the ability to write equations like x + 3 = 2 for which there is no answer among natural numbers. To provide an answer mathematicians generalize to the set of all integers^{w} (denoted ℤ) which includes negative integers.
 The Additive identity^{w} is zero because x + 0 = x.
 0 is an idempotent^{*} element for addition since 0 + 0 = 0
 The absolute value or modulus of x is defined as |x| = x if x ≥ 0, and |x| = −x otherwise.
 Absolute value is an idempotent^{*} function since abs(abs(x)) = abs(x)
 Integers form a ring^{*} (denoted ℤ), a subring of the field of rational numbers. Ring^{w} is defined below.
 Z_{n} is used to denote the set of integers modulo n ^{*}.
 Modular arithmetic is essentially arithmetic in the quotient ring^{w} Z/nZ (which has n elements).
 An ideal^{*} is a special subset of a ring. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3.
 The study of integers is called Number theory^{w}.
 a ∣ b means a divides b.
 a ∤ b means a does not divide b.
 p^{a} ∥ n means p^{a} exactly divides n (i.e. p^{a} divides n but p^{a+1} does not).
 A prime number is a natural number greater than 1 that can only be divided by itself and one.
 If a, b, c, and d are primes then the Least common multiple^{w} of abc and c^{2}d is abc^{2}d. (See Tutorial:least common multiples)
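The least-common-multiple rule above can be checked numerically. The following is a minimal Python sketch (the particular primes chosen are arbitrary):

```python
# Check the rule lcm(abc, c^2 d) = a b c^2 d for distinct primes a, b, c, d,
# using the concrete primes (a, b, c, d) = (2, 3, 5, 7).
import math

a, b, c, d = 2, 3, 5, 7
lhs = math.lcm(a * b * c, c**2 * d)   # lcm(30, 175)
rhs = a * b * c**2 * d                # 2 * 3 * 25 * 7
assert lhs == rhs == 1050
```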
Multiplication^{w} (See Tutorial:multiplication) is defined as repeated addition, and its inverse is division^{w}. But this leads to equations like 2x = 1 for which there is no answer. The solution is to generalize to the set of rational numbers^{w} (denoted ℚ) which include fractions (See Tutorial:fractions). Any number which isn't rational is irrational^{w}. See also p-adic number^{*}
 Rational numbers form a division algebra^{*}.
 The set of all rational numbers minus zero forms a multiplicative Group^{*}.
 The Multiplicative identity^{w} is one because x * 1 = x.
 Division by zero is undefined and undefinable^{w}. 1/0 exists nowhere on the complex plane^{w}. It does, however, exist on the Riemann sphere^{w} (often called the extended complex plane) where it is surprisingly well behaved. See also Wheel theory^{*} and L'Hôpital's rule^{w}.
 (Addition and multiplication are fast but division is slow even for computers^{*}.)
Exponentiation^{w} (See Tutorial:exponents) is defined as repeated multiplication, and its inverses are roots^{w} and logarithms^{w}. But this leads to multiple equations with no solutions:
 Equations like x^{2} = 2. The solution is to generalize to the set of algebraic numbers^{w} (denoted 𝔸). See also algebraic integer^{*}. To see a proof that the square root of two is irrational see Square root of 2^{w}.
 Equations like 2^{x} = 3. The solution (because x is transcendental^{w}) is to generalize to the set of Real numbers^{w} (denoted ℝ).
 Equations like x^{2} = −1 and e^{x} = −1. The solution is to generalize to the set of complex numbers^{w} (denoted ℂ) by defining i = sqrt(−1). A single complex number consists of a real part a and an imaginary part bi (See Tutorial:complex numbers). Imaginary numbers^{w} often occur in equations involving change with respect to time. If friction is resistance to motion then imaginary friction would be resistance to change of motion wrt time. (In other words, imaginary friction would be mass.) In fact, in the equation for the Spacetime interval^{w} (given below), time itself is an imaginary quantity.
 Complex numbers can be used to represent and perform rotations^{w} but only in 2 dimensions. Hypercomplex numbers^{w} like quaternions^{w} (denoted ℍ), octonions^{w} (denoted 𝕆), and sedenions^{*} (denoted 𝕊) are one way to generalize complex numbers to some (but not all) higher dimensions.
 Split-complex numbers^{*} (hyperbolic complex numbers) are similar to complex numbers except that i^{2} = +1.
 The Complex conjugate^{w} of the complex number a + bi is a − bi. (Not to be confused with the dual^{w} of a vector.)
 The complex number a + bi written in matrix notation^{w} is:
 [ a −b ]
 [ b  a ]
 (a+ib)(c+id) is (ac − bd) + i(ad + bc)
 The complex numbers are not ordered^{w}. However the absolute value^{w} or modulus^{*} of a complex number a + bi is:
 |a + bi| = sqrt(a^{2} + b^{2})
 which is the square root of the determinant of its matrix representation. (See Determinant^{w})
 There are n solutions of x^{n} = 1 (the n-th roots of unity).
 0^0 = 1. See Empty product^{w}.
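Several of the claims above (i^{2} = −1, the modulus, and the n solutions of x^{n} = 1) can be checked with Python's built-in complex type. This is an illustrative sketch, not part of any formal development:

```python
import cmath
import math

# i * i == -1 with Python's built-in complex type
assert 1j * 1j == -1

# The modulus of a + bi is sqrt(a^2 + b^2)
z = 3 + 4j
assert abs(z) == 5.0

# The n solutions of x**n = 1 (roots of unity), here for n = 4
n = 4
roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]
for r in roots:
    assert abs(r**n - 1) < 1e-9   # each root really satisfies x^n = 1
```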
Tetration^{w} is defined as repeated exponentiation and its inverses are called the super-root and the super-logarithm.
When a quantity, like the charge of a single electron, becomes so small that it is insignificant we, quite justifiably, treat it as though it were zero. A quantity that can be treated as though it were zero, even though it very definitely is not, is called infinitesimal. If Q is a finite amount of charge then using Leibniz's notation^{w} dQ would be an infinitesimal amount of charge. See Differential^{w}
Likewise when a quantity becomes so large that a regular finite quantity becomes insignificant then we call it infinite. We would say that the mass of the ocean is infinite (ω). But compared to the mass of the Milky Way galaxy our ocean is insignificant. So we would say the mass of the Galaxy is doubly infinite (ω^{2}).
Infinity and the infinitesimal are called Hyperreal numbers^{w} (denoted *ℝ). Hyperreals behave, in every way, exactly like real numbers. For example, 2ω is exactly twice as big as ω. In reality, the mass of the ocean is a real number so it is hardly surprising that it behaves like one. See Epsilon numbers^{*} and Big O notation^{*}
Intervals
 [2,5[ or [2,5) denotes the interval^{w} from 2 to 5, including 2 but excluding 5.
 [3..7] denotes all integers from 3 to 7.
 The set of all reals is unbounded at both ends.
 An open interval does not include its endpoints.
 Compactness^{*} is a property that generalizes the notion of a subset being closed and bounded.
 The unit interval^{*} is the closed interval [0,1]. It is often denoted I.
 The unit square^{*} is a square whose sides have length 1.
 Often, "the" unit square refers specifically to the square in the Cartesian plane^{w} with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1).
 The unit disk^{*} in the complex plane is the set of all complex numbers of absolute value less than one and is often denoted 𝔻.
Vectors
 See also: Algebraic geometry^{*}, Algebraic variety^{*}, Scheme^{*}, Algebraic manifold^{*}, and Linear algebra^{w}
The one dimensional number line can be generalized to a multidimensional Cartesian coordinate system^{w} thereby creating multidimensional math (i.e. geometry^{w}).
For sets A and B, the Cartesian product A × B is the set of all ordered pairs^{w} (a, b) where a ∈ A and b ∈ B.^{[2]} The direct product^{*} generalizes the Cartesian product. (See also Direct sum^{*})
 ℝ^{n} is the Cartesian product^{w} ℝ × ℝ × ⋯ × ℝ (n copies).
 ℂ^{n} is the Cartesian product^{w} ℂ × ℂ × ⋯ × ℂ (See Complexification^{*})
A vector space^{w} is a coordinate space^{w} with vector addition^{w} and scalar multiplication^{w} (multiplication of a vector and a scalar^{w} belonging to a field^{w}).
 If ê_{1}, ê_{2}, ê_{3} are orthogonal^{w} unit^{w} basis vectors^{*}
 and a = (a_{1}, a_{2}, a_{3}) and b = (b_{1}, b_{2}, b_{3}) are arbitrary vectors then we can (and usually do) write: a = a_{1}ê_{1} + a_{2}ê_{2} + a_{3}ê_{3}
 A module^{*} generalizes a vector space by allowing multiplication of a vector and a scalar belonging to a ring^{w}.
Coordinate systems define the length of vectors parallel to one of the axes but leave all other lengths undefined. This concept of "length" which only works for certain vectors is generalized as the "norm^{w}" which works for all vectors. The norm of vector v is denoted ‖v‖. The double bars are used to avoid confusion with the absolute value of a scalar.
 Taxicab metric^{w}: ‖x‖_{1} = |x_{1}| + ⋯ + |x_{n}| (called the L^{1} norm. See L^{p} space^{*}. Sometimes called Lebesgue spaces. See also Lebesgue measure^{w}.)
 In Euclidean space^{w} the norm ‖x‖_{2} = sqrt(x_{1}^{2} + ⋯ + x_{n}^{2}) (called the L^{2} norm) doesn't depend on the choice of coordinate system. As a result, rigid objects can rotate in Euclidean space. See proof of the Pythagorean theorem^{w} to the right. L^{2} is the only Hilbert space^{*} among L^{p} spaces.
 In Minkowski space^{w} (See Pseudo-Euclidean space^{*}) the Spacetime interval^{w} is s^{2} = x^{2} + y^{2} + z^{2} − (ct)^{2}
 In complex space^{*} the most common norm of an n dimensional vector is obtained by treating it as though it were a regular real valued 2n dimensional vector in Euclidean space: ‖z‖ = sqrt(|z_{1}|^{2} + ⋯ + |z_{n}|^{2})
 A Banach space^{*} is a normed vector space^{*} that is also a complete metric space^{w} (there are no points missing from it).
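The norms above are easy to compute directly. The following Python sketch implements the L^{1} and L^{2} norms and the squared spacetime interval; the (+ + + −) sign convention matches the one used later in this article, but the opposite convention also appears in the literature:

```python
import math

def norm_l1(v):
    # Taxicab (L^1) norm: sum of absolute values of the components
    return sum(abs(x) for x in v)

def norm_l2(v):
    # Euclidean (L^2) norm: square root of the sum of squares
    return math.sqrt(sum(x * x for x in v))

def spacetime_interval_sq(x, y, z, ct):
    # Squared Minkowski interval with signature (+ + + -).
    # The sign convention is an assumption; both appear in the literature.
    return x * x + y * y + z * z - ct * ct

assert norm_l1([3, -4]) == 7
assert norm_l2([3, -4]) == 5.0
assert spacetime_interval_sq(3, 0, 0, 3) == 0   # lightlike separation
```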
A manifold^{w} is a type of topological space^{w} in which each point has an infinitely small neighbourhood^{w} that is homeomorphic^{w} to Euclidean space^{w}. A manifold is locally, but not globally, Euclidean. A Riemannian metric^{*} on a manifold allows distances and angles to be measured.
 A Tangent space^{*} T_{p}M is the set of all vectors tangent to the manifold M at point p.
 Informally, a tangent bundle^{*} (red cylinder in image to the right) on a differentiable manifold (blue circle) is obtained by joining all the tangent spaces^{*} (red lines) together in a smooth and non-overlapping manner.^{[3]} The tangent bundle always has twice as many dimensions as the original manifold.
 A vector bundle^{*} is the same thing minus the requirement that it be tangent.
 A fiber bundle^{*} is the same thing minus the requirement that the fibers be vector spaces.
 The cotangent bundle^{*} (Dual bundle^{*}) of a differentiable manifold is obtained by joining all the cotangent spaces^{*} (pseudovector spaces).
 The cotangent bundle always has twice as many dimensions as the original manifold.
 Sections of that bundle are known as differential oneforms^{w}.
A Lie group^{*} is a group that is also a finite-dimensional real smooth manifold, in which the group operation is multiplication rather than addition.^{[4]} n×n invertible matrices^{*} (See below) are a Lie group.
 A Lie algebra^{*} (See Infinitesimal transformation^{*}) is a local or linearized version of a Lie group.
 The Lie derivative^{w} generalizes the Lie bracket^{w} which generalizes the wedge product^{w} which is a generalization of the cross product^{w} which only works in 3 dimensions.
Multiplication of vectors
Multiplication can be generalized to allow for multiplication of vectors in 3 different ways:
Dot product^{w} (a Scalar^{w}): a · b = a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3} = ‖a‖‖b‖cos θ
 Strangely, only parallel components multiply.
 The dot product can be generalized to the bilinear form^{w} B(u, v) = u^{T}Av where A is a (0,2) tensor. (For the dot product A is the identity tensor).
 Two vectors are orthogonal if a · b = 0
 A bilinear form is symmetric if B(u, v) = B(v, u)
 Its associated quadratic form^{*} is Q(v) = B(v, v)
 In Euclidean space a · a = ‖a‖^{2}
 The inner product^{w} is a generalization of the dot product to complex vector space.
 The 2 vectors are called "bra" ⟨u| and "ket" |v⟩^{*}.
 A Hilbert space^{*} is an inner product space^{w} that is also a Complete metric space^{w}.
 The inner product can be generalized to ⟨u, v⟩ = ū^{T}Av (a sesquilinear form^{w})
 A complex Hermitian form (also called a symmetric sesquilinear form), is a sesquilinear form h : V × V → C such that h(u, v) is the complex conjugate of h(v, u).^{[5]}
 A is a Hermitian operator^{*} iff^{w} A equals its own conjugate transpose. Often written as A = A^{†}
 The curl operator, ∇×, is Hermitian.
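A small numerical sketch of the dot product, the bilinear form, and the complex inner product (the vectors chosen are arbitrary examples):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -2.0, 0.0])

# Dot product: only parallel components contribute.
# These two vectors happen to be orthogonal: 1*4 + 2*(-2) + 3*0 = 0
assert np.dot(u, v) == 0.0

# General bilinear form B(u, v) = u^T A v; A = identity recovers the dot product
A = np.eye(3)
assert np.isclose(u @ A @ v, np.dot(u, v))

# Complex inner product <u|v> = sum(conj(u_i) * v_i), a sesquilinear form.
# np.vdot conjugates its first argument.
a = np.array([1 + 1j, 2j])
b = np.array([1j, 1.0])
assert np.isclose(np.vdot(a, b), np.conj(1 + 1j) * 1j + np.conj(2j) * 1.0)
```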
Outer product^{w} (a tensor^{w} called a dyadic^{w}):
 As one would expect, every component of one vector multiplies with every component of the other vector:
 u ⊗ v = uv^{T}, i.e. (u ⊗ v)_{ij} = u_{i}v_{j}
 Taking the dot product of u⊗v and any vector x (See Visualization of Tensor multiplication^{w}) causes the components of x not pointing in the direction of v to become zero. What remains is then rotated from v to u.
 A rotation matrix can be constructed by summing three outer products. The first two sum to form a bivector. The third one rotates the axis of rotation zero degrees.
 The Tensor product^{w} generalizes the outer product^{w}.
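The claim that (u⊗v)x keeps only the component of x along v and rotates it onto u can be verified numerically:

```python
import numpy as np

u = np.array([0.0, 1.0])   # target direction
v = np.array([1.0, 0.0])   # direction that gets "picked out"

T = np.outer(u, v)         # (u ⊗ v)_ij = u_i v_j

x = np.array([5.0, 7.0])
# (u⊗v) x = u (v·x): the component of x along v survives, rotated onto u
result = T @ x
assert np.allclose(result, u * np.dot(v, x))   # -> [0, 5]
```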
Wedge product^{w} (a simple bivector^{w}):
 The wedge product is also called the exterior product^{w} (sometimes mistakenly called the outer product).
 The term "exterior" comes from the exterior product of two vectors not being a vector.
 Just as a vector has length and direction so a bivector has an area and an orientation.
 In three dimensions a ∧ b is a pseudovector^{w} and its dual^{w} is the cross product^{w} a × b.
 The triple product^{w} a∧b∧c is a trivector which is a 3rd degree tensor.
 In 3 dimensions a trivector is a pseudoscalar so in 3 dimensions every trivector can be represented as a scalar times the unit trivector. See LeviCivita symbol^{w}
 The Matrix commutator^{w} generalizes the wedge product.
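The duality between the wedge product and the cross product in 3 dimensions can be checked by representing the bivector a∧b as the antisymmetric matrix with entries a_{i}b_{j} − a_{j}b_{i} (the factor of 1/2 below is a convention of this representation):

```python
import numpy as np

def wedge(a, b):
    # Bivector a∧b represented as the antisymmetric matrix a_i b_j - a_j b_i
    return np.outer(a, b) - np.outer(b, a)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

B = wedge(a, b)
assert np.allclose(B, -B.T)   # bivectors are antisymmetric

# Levi-Civita symbol eps_{ijk}: +1 on even permutations, -1 on odd
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

# The dual of the bivector: c_k = (1/2) eps_{kij} B_{ij} equals a x b
c_from_B = 0.5 * np.einsum('kij,ij->k', eps, B)
assert np.allclose(c_from_B, np.cross(a, b))   # -> [0, 0, 1]
```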
The dual^{w} of a vector a is the bivector ā representing the plane orthogonal to a.
Tensors
Multiplying a tensor and a vector results in a new vector that not only might have a different magnitude but might even point in a completely different direction: T · v = w
Unless the tensor is the identity tensor in which case the vector is completely unchanged: I · v = v
Some special cases:
Complex numbers can be used to represent and perform rotations^{w} but only in 2 dimensions.
Tensors^{w}, on the other hand, can be used in any number of dimensions to represent and perform rotations and other linear transformations^{w}. See Visualization of Tensor multiplication^{w} for a full explanation of how to multiply tensors and vectors.
 Any affine transformation^{w} is equivalent to a linear transformation followed by a translation^{w} of the origin. (The origin^{w} is always a fixed point for any linear transformation.) "Translation" is just a fancy word for "move".
Just as a vector is a sum of unit vectors multiplied by constants so a tensor is a sum of unit dyadics (See outer product above) multiplied by constants. Each dyadic can be thought of as a plane having an orientation and magnitude.
The order or degree of a tensor is the total number of indices required to identify each component uniquely.^{[6]} A vector is a 1st-order tensor.
A simple tensor is a tensor that can be written as a product of tensors of the form v_{1} ⊗ v_{2} ⊗ ⋯ ⊗ v_{k} (See Outer product above.) The rank of a tensor T is the minimum number of simple tensors that sum to T.^{[7]}
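A short numerical illustration of tensors (here just matrices) acting on vectors, including the identity and a pure rotation:

```python
import numpy as np

v = np.array([1.0, 0.0])

# A generic tensor can change both magnitude and direction...
T = np.array([[0.0, -2.0],
              [2.0,  0.0]])        # scale by 2 and rotate 90 degrees
assert np.allclose(T @ v, [0.0, 2.0])

# ...while the identity tensor leaves every vector unchanged
I = np.eye(2)
assert np.allclose(I @ v, v)

# A pure rotation by angle theta (a linear transformation)
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R @ v, [0.0, 1.0])
```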
Linear groups
A square matrix^{w} of order n is an nbyn matrix. Any two square matrices of the same order can be added and multiplied. A matrix is invertible if and only if its determinant is nonzero.
GL_{n}(F) or GL(n, F), or simply GL(n) is the Lie group^{*} of n×n invertible matrices with entries from the field F. The group GL(n, F) and its subgroups are often called linear groups or matrix groups.
 SL(n, F) or SL_{n}(F), is the subgroup^{*} of GL(n, F) consisting of matrices with a determinant^{w} of 1.
 U(n), the Unitary group of degree n is the group^{w} of n × n unitary matrices^{w}. (More general unitary matrices may have complex determinants with absolute value 1, rather than real 1 in the special case.) The group operation is matrix multiplication^{w}.^{[8]}
 SU(n), the special unitary group of degree n, is the Lie group^{*} of n×n unitary matrices^{w} with determinant^{w} 1.
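The defining properties of U(n) and SU(n) can be verified for a sample 2 × 2 matrix (the example matrix is arbitrary):

```python
import numpy as np

# A sample element of U(2): unitary means U† U = I
U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])
assert np.allclose(U.conj().T @ U, np.eye(2))

# A unitary matrix's determinant has absolute value 1
det = np.linalg.det(U)
assert np.isclose(abs(det), 1.0)

# An element of SU(2) additionally has determinant exactly 1;
# any U(n) matrix can be rescaled by a phase to land in SU(n)
V = det ** (-1 / 2) * U
assert np.isclose(np.linalg.det(V), 1.0)
```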
Symmetry groups
Aff(n,K): the affine group or general affine group of any affine space over a field K is the group of all invertible affine transformations from the space into itself.
 E(n): rotations, reflections, and translations.
 O(n): rotations, reflections
 SO(n): rotations
 so(3) is the Lie algebra of SO(3) and consists of all skewsymmetric^{w} 3 × 3 matrices.
Rotations
In 4 spatial dimensions a rigid object can rotate in 2 different ways simultaneously^{*}.
 See also: Hypersphere of rotations^{*}
 See Rotation group SO(3)^{*}, Special unitary group^{*}, Plate trick^{*}, Spin representation^{*}, Spin group^{*}, Pin group^{*}, Spinor^{*}, Clifford algebra^{w}, Indefinite orthogonal group^{*}, Root system^{*}, Bivectors^{w}, Curl^{w}
From Wikipedia:Rotation group SO(3):
Consider the solid ball in R^{3} of radius π. For every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The two rotations through π and through −π are the same. So we identify^{*} (or "glue together") antipodal points^{*} on the surface of the ball.
The ball with antipodal surface points identified is a smooth manifold^{*}, and this manifold is diffeomorphic^{*} to the rotation group. It is also diffeomorphic to the real 3dimensional projective space^{*} RP^{3}, so the latter can also serve as a topological model for the rotation group.
These identifications illustrate that SO(3) is connected^{*} but not simply connected^{*}. As to the latter, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open".
Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The Balinese plate trick^{*} and similar tricks demonstrate this practically.
The same argument can be performed in general, and it shows that the fundamental group^{*} of SO(3) is the cyclic group^{w} of order 2. In physics applications, the nontriviality of the fundamental group allows for the existence of objects known as spinors^{*}, and is an important tool in the development of the spinstatistics theorem^{*}.
The universal cover^{*} of SO(3) is a Lie group^{*} called Spin(3)^{*}. The group Spin(3) is isomorphic to the special unitary group^{*} SU(2); it is also diffeomorphic to the unit 3sphere^{*} S^{3} and can be understood as the group of versors^{*} (quaternions^{w} with absolute value^{w} 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotation^{*}. The map from S^{3} onto SO(3) that identifies antipodal points of S^{3} is a surjective^{*} homomorphism^{*} of Lie groups, with kernel^{*} {±1}. Topologically, this map is a twotoone covering map^{*}. (See the plate trick^{*}.)
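The two-to-one covering of SO(3) by the unit quaternions can be demonstrated numerically: a versor q and its negative −q produce the same rotation matrix. The conversion formula below is the standard one for a unit quaternion (w, x, y, z):

```python
import numpy as np

def quat_to_rotation(q):
    # Rotation matrix of a unit quaternion (w, x, y, z)
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# A versor for a 90-degree rotation about the z-axis
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])

R = quat_to_rotation(q)
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0])

# The two-to-one covering: q and -q map to the SAME rotation
assert np.allclose(quat_to_rotation(-q), R)
```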
Spin group
From Wikipedia:Spin group:
The spin group Spin(n)^{[9]}^{[10]} is the double cover^{*} of the special orthogonal group^{*} SO(n) = SO(n, R), such that there exists a short exact sequence^{*} of Lie groups^{*} (with n ≠ 2): 1 → Z_{2} → Spin(n) → SO(n) → 1
As a Lie group, Spin(n) therefore shares its dimension^{*}, n(n − 1)/2, and its Lie algebra^{*} with the special orthogonal group.
For n > 2, Spin(n) is simply connected^{*} and so coincides with the universal cover^{*} of SO(n)^{*}.
The nontrivial element of the kernel is denoted −1, which should not be confused with the orthogonal transform of reflection through the origin^{*}, generally denoted −I .
Spin(n) can be constructed as a subgroup^{*} of the invertible elements in the Clifford algebra^{w} Cl(n). A distinct article discusses the spin representations^{*}.
From Wikipedia:Spinor:
a spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360° (see picture). This property characterizes spinors. It is also possible to associate a substantially similar notion of spinor to Minkowski space in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913.
Quote from Elie Cartan: The Theory of Spinors, Hermann, Paris, 1966, first sentence of the Introduction section of the beginning of the book (before the page numbers start): "Spinors were first used under that name, by physicists, in the field of Quantum Mechanics. In their most general form, spinors were discovered in 1913 by the author of this work, in his investigations on the linear representations of simple groups*; they provide a linear representation of the group of rotations in a space with any number n of dimensions, each spinor having 2^{ν} components where n = 2ν + 1 or n = 2ν." The star (*) refers to Cartan 1913.
(Note: ν is the number of simultaneous independent rotations^{*} an object can have in n dimensions.)
In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles. More precisely, it is the fermions of spin1/2 that are described by spinors, which is true both in the relativistic and nonrelativistic theory. The wavefunction of the nonrelativistic electron has values in 2 component spinors transforming under threedimensional infinitesimal rotations. The relativistic Dirac equation^{*} for the electron is an equation for 4 component spinors transforming under infinitesimal Lorentz transformations for which a substantially similar theory of spinors exists.
Spinors form a vector space, usually over the complex numbers, equipped with a linear group representation of the spin group that does not factor through a representation of the group of rotations (see diagram). The spin group is the group of rotations keeping track of the homotopy class. Spinors are needed to encode basic information about the topology of the group of rotations because that group is not simply connected, but the simply connected spin group is its double cover. So for every rotation there are two elements of the spin group that represent it. Geometric vectors and other tensors cannot feel the difference between these two elements, but they produce opposite signs when they affect any spinor under the representation.
Multivectors
External links:
 A brief introduction to geometric algebra
 A brief introduction to Clifford algebra
 The Construction of Spinors in Geometric Algebra
 Functions of Multivector Variables
From Wikipedia:Multivector:
The wedge product^{w} operation (See Exterior algebra^{w}) used to construct multivectors is linear, associative and alternating, which reflect the properties of the determinant. This means for vectors u, v and w in a vector space V and for scalars α, β, the wedge product has the properties,
 Linear: (αu + βv) ∧ w = α(u ∧ w) + β(v ∧ w)
 Associative: (u ∧ v) ∧ w = u ∧ (v ∧ w)
 Alternating: u ∧ u = 0, which implies u ∧ v = −(v ∧ u)
However the wedge product is not invertible because many different pairs of vectors can have the same wedge product.
The product of p vectors, v_{1} ∧ v_{2} ∧ ⋯ ∧ v_{p}, is called a grade p multivector, or a p-vector. The maximum grade of a multivector is the dimension of the vector space V.
The linearity of the wedge product allows a multivector to be defined as the linear combination of basis multivectors. There are C(n, p) (the binomial coefficient) basis p-vectors in an n-dimensional vector space.^{[11]}
W. K. Clifford^{*} combined multivectors with the inner product^{w} defined on the vector space, in order to obtain a general construction for hypercomplex numbers that includes the usual complex numbers and Hamilton's quaternions^{w}.^{[12]}^{[13]}
The Clifford product between two vectors is linear and associative like the wedge product. But unlike the wedge product the Clifford product is invertible.
Clifford's relation preserves the alternating property for the product of vectors that are perpendicular. But in contrast to the wedge product, the Clifford product of a vector with itself is no longer zero. To see why consider the square (quadratic form^{*}) of a single vector u = ae_{1} + be_{2}:
u^{2} = a^{2}e_{1}^{2} + ab(e_{1}e_{2} + e_{2}e_{1}) + b^{2}e_{2}^{2}
From the Pythagorean theorem we know that:
u^{2} = ‖u‖^{2} = a^{2} + b^{2}
Therefore Clifford deduced that:
e_{1}^{2} = e_{2}^{2} = 1 and e_{1}e_{2} = −e_{2}e_{1}
And therefore that:
(e_{1}e_{2})^{2} = e_{1}e_{2}e_{1}e_{2} = −e_{1}e_{1}e_{2}e_{2} = −1
And i, as we already know, has the effect of rotating complex numbers.
For any 2 arbitrary vectors u = ae_{1} + be_{2} and v = ce_{1} + de_{2}:
uv = ace_{1}^{2} + ade_{1}e_{2} + bce_{2}e_{1} + bde_{2}^{2}
Applying Clifford's deductions we get:
uv = (ac + bd) + (ad − bc)e_{1}e_{2} = u · v + u ∧ v
For comparison here is the outer product of the same 2 vectors:
 [ ac ad ]
 [ bc bd ]
 (See divergence, curl, & gradient below)
This particular Clifford algebra is known as Cl_{2,0}. The subscript 2 indicates that the 2 basis vectors are square roots of +1. See Metric signature^{*}. If we had used e_{1}^{2} = e_{2}^{2} = −1 then the result would have been Cl_{0,2}.
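The deductions above can be verified by implementing Cl_{2,0} directly. The sketch below represents a multivector by its four coefficients over the basis (1, e_{1}, e_{2}, e_{1}e_{2}); the multiplication table is derived from e_{1}^{2} = e_{2}^{2} = +1 and e_{1}e_{2} = −e_{2}e_{1}:

```python
import numpy as np

def gp(p, q):
    # Geometric (Clifford) product in Cl(2,0).
    # p = s + x e1 + y e2 + B e12, q = t + u e1 + v e2 + C e12
    s, x, y, B = p
    t, u, v, C = q
    return np.array([
        s*t + x*u + y*v - B*C,   # scalar part   (e12^2 = -1)
        s*u + x*t - y*C + B*v,   # e1 part
        s*v + y*t + x*C - B*u,   # e2 part
        s*C + B*t + x*v - y*u,   # e12 part      (e1 e2 = e12, e2 e1 = -e12)
    ])

e1  = np.array([0.0, 1.0, 0.0, 0.0])
e2  = np.array([0.0, 0.0, 1.0, 0.0])
e12 = gp(e1, e2)

assert np.allclose(gp(e1, e1), [1, 0, 0, 0])      # e1^2 = +1
assert np.allclose(gp(e1, e2), -gp(e2, e1))       # basis vectors anticommute
assert np.allclose(gp(e12, e12), [-1, 0, 0, 0])   # (e1 e2)^2 = -1, like i
```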
From Wikipedia:Clifford algebra:
Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form: Q(x) = x_{1}^{2} + ⋯ + x_{p}^{2} − x_{p+1}^{2} − ⋯ − x_{p+q}^{2}
where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the signature^{*} of the quadratic form. The real vector space with this quadratic form is often denoted R^{p,q}. The Clifford algebra on R^{p,q} is denoted Cℓ_{p,q}(R). The symbol Cℓ_{n}(R) means either Cℓ_{n,0}(R) or Cℓ_{0,n}(R) depending on whether the author prefers positivedefinite or negativedefinite spaces.
A standard basis^{w} {e_{i}} for R^{p,q} consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. The algebra Cℓ_{p,q}(R) will therefore have p vectors that square to +1 and q vectors that square to −1.
Note that Cℓ_{0,0}(R) is naturally isomorphic to R since there are no nonzero vectors. Cℓ_{0,1}(R) is a twodimensional algebra generated by a single vector e_{1} that squares to −1, and therefore is isomorphic as an algebra (but not as a superalgebra^{*}) to C, the field of complex numbers. The algebra Cℓ_{0,2}(R) is a fourdimensional algebra spanned by {1, e_{1}, e_{2}, e_{1}e_{2}}. The latter three elements square to −1 and all anticommute, and so the algebra is isomorphic to the quaternions H. Cℓ_{0,3}(R) is an 8dimensional algebra isomorphic to the direct sum^{*} H ⊕ H called splitbiquaternions^{*}.
From Wikipedia:Spacetime algebra:
Spacetime algebra^{*} (STA) is a name for the Clifford algebra^{w} Cl_{3,1}(R), or equivalently the geometric algebra^{w} G(M^{4}), which can be particularly closely associated with the geometry of special relativity^{w} and relativistic spacetime^{w}. See also Algebra of physical space^{*}.
The spacetime algebra may be built up from an orthogonal basis of one timelike vector and three spacelike vectors, {γ_{0}, γ_{1}, γ_{2}, γ_{3}}, with the multiplication rule
γ_{μ}γ_{ν} + γ_{ν}γ_{μ} = 2η_{μν}
where η_{μν} is the Minkowski metric^{w} with signature (+ + + −).
Thus, γ_{1}^{2} = γ_{2}^{2} = γ_{3}^{2} = +1, γ_{0}^{2} = −1, otherwise γ_{μ}γ_{ν} = −γ_{ν}γ_{μ} for μ ≠ ν.
The basis vectors share these properties with the Dirac matrices^{*}, but no explicit matrix representation need be used in STA.
Rotors
From Wikipedia:Geometric algebra
The inverse of a vector a is: a^{−1} = a / ‖a‖^{2} (so that aa^{−1} = 1)
The projection of a onto m (or the parallel part) is a_{∥} = (a · m)m^{−1}
and the rejection of a from m (or the orthogonal part) is a_{⊥} = (a ∧ m)m^{−1}
The reflection of a vector a along a vector m, or equivalently across the hyperplane orthogonal to m, is the same as negating the component of a vector parallel to m. The result of the reflection will be
a′ = −mam^{−1}
If m is a unit vector then m^{−1} = m and a′ = −mam
If we have a product of vectors R = ab then we denote the reverse as R^{†} = ba
Any rotation is equivalent to 2 reflections: a″ = nm a m^{−1}n^{−1} = R a R^{−1} where R = nm
R is called a Rotor
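The fact that two reflections compose to a rotation by twice the angle between the mirrors can be checked with ordinary vector algebra (the GA sandwich product −mam^{−1} reduces to the familiar reflection formula below):

```python
import numpy as np

def reflect(a, m):
    # Negate the component of a parallel to m (reflection across the
    # hyperplane orthogonal to m); equivalent to the GA formula -m a m^{-1}.
    mhat = m / np.linalg.norm(m)
    return a - 2 * np.dot(a, mhat) * mhat

a = np.array([1.0, 0.0])
m = np.array([1.0, 0.0])                              # first mirror
n = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])  # second mirror, 45 deg away

# Two reflections compose to a rotation by twice the angle between mirrors
rotated = reflect(reflect(a, m), n)
assert np.allclose(rotated, [0.0, 1.0])               # 90-degree rotation
```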
Quaternions
From Wikipedia:Quaternion:
The rotor σ_{1}σ_{2} corresponds to a rotation of 180° in the plane containing σ_{1} and σ_{2}.
This is very similar to the corresponding quaternion formula, in which k generates a 180° rotation about the z axis.
In fact, the two are identical, if we make the identification (under one common sign convention) i = σ_{3}σ_{2}, j = σ_{1}σ_{3}, k = σ_{2}σ_{1}
and it is straightforward to confirm that this preserves the Hamilton relations i^{2} = j^{2} = k^{2} = ijk = −1
In this picture, quaternions correspond not to vectors but to bivectors^{w} – quantities with magnitude and orientations associated with particular 2D planes rather than 1D directions. The relation to complex numbers^{w} becomes clearer, too: in 2D, with two vector directions σ_{1} and σ_{2}, there is only one bivector basis element σ_{1}σ_{2}, so only one imaginary. But in 3D, with three vector directions, there are three bivector basis elements σ_{1}σ_{2}, σ_{2}σ_{3}, σ_{3}σ_{1}, so three imaginaries.
The usefulness of quaternions for geometrical computations can be generalised to other dimensions, by identifying the quaternions as the even part Cℓ^{+}_{3,0}(R) of the Clifford algebra^{w} Cℓ_{3,0}(R).
There are at least two ways of representing quaternions as matrices^{w} in such a way that quaternion addition and multiplication correspond to matrix addition and matrix multiplication^{w}.
Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as
 [ a+bi  c+di ]
 [ −c+di a−bi ]
Using 4 × 4 real matrices, that same quaternion can be written as
 [ a −b −c −d ]
 [ b  a −d  c ]
 [ c  d  a −b ]
 [ d −c  b  a ]
(this is one of several equivalent conventions)
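That a 4 × 4 real representation really multiplies like quaternions can be confirmed by checking the Hamilton relations with matrix multiplication (the sign convention below is one of several equivalent choices):

```python
import numpy as np

def quat_matrix(a, b, c, d):
    # 4x4 real matrix representing the quaternion a + bi + cj + dk
    # under one of several equivalent conventions (left multiplication).
    return np.array([
        [a, -b, -c, -d],
        [b,  a, -d,  c],
        [c,  d,  a, -b],
        [d, -c,  b,  a],
    ], dtype=float)

I = quat_matrix(0, 1, 0, 0)
J = quat_matrix(0, 0, 1, 0)
K = quat_matrix(0, 0, 0, 1)

# Matrix multiplication reproduces the Hamilton relations
assert np.allclose(I @ I, -np.eye(4))   # i^2 = -1
assert np.allclose(I @ J, K)            # ij = k
assert np.allclose(J @ I, -K)           # ji = -k (non-commutative)
```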
Spinors
 External link:An introduction to spinors
Spinors may be regarded as nonnormalised rotors in which the reverse rather than the inverse is used in the sandwich product.^{[14]}
From Wikipedia:Clifford_algebra#Spinors:
Clifford algebras Cℓ_{p,q}(C), with p + q = 2n even, are matrix algebras which have a complex representation of dimension 2^{n}. By restricting to the group Pin_{p,q}(R) we get a complex representation of the Pin group of the same dimension, called the spin representation^{*}. If we restrict this to the spin group Spin_{p,q}(R) then it splits as the sum of two half spin representations (or Weyl representations) of dimension 2^{n−1}.
If p + q = 2n + 1 is odd then the Clifford algebra Cℓ_{p,q}(C) is a sum of two matrix algebras, each of which has a representation of dimension 2^{n}, and these are also both representations of the Pin group Pin_{p,q}(R). On restriction to the spin group Spin_{p,q}(R) these become isomorphic, so the spin group has a complex spinor representation of dimension 2^{n}.
More generally, spinor groups and pin groups over any field have similar representations whose exact structure depends on the structure of the corresponding Clifford algebras^{*}: whenever a Clifford algebra has a factor that is a matrix algebra over some division algebra, we get a corresponding representation of the pin and spin groups over that division algebra. For examples over the reals see the article on spinors^{*}.
From Wikipedia:Tensor#Spinors:
When changing from one orthonormal basis^{*} (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected^{*} (see orientation entanglement^{*} and plate trick^{*}): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.^{[15]} A spinor^{*} is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.^{[16]}^{[17]}
Succinctly, spinors are elements of the spin representation^{*} of the rotation group, while tensors are elements of its tensor representations^{*}. Other classical groups^{*} have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.
Pauli matrices
The Pauli matrices are a set of "gamma" matrices in dimension 3 with metric of Euclidean signature (3,0): three 2 × 2 complex^{w} matrices^{w} which are Hermitian^{w} and unitary^{w}.^{[18]} They are (rows separated by semicolons):
σ_{1} = ( 0 1 ; 1 0 ), σ_{2} = ( 0 −i ; i 0 ), σ_{3} = ( 1 0 ; 0 −1 )
commutation^{w} relations: [σ_{a}, σ_{b}] = 2i ε_{abc} σ_{c}
anticommutation^{*} relations: {σ_{a}, σ_{b}} = 2δ_{ab} I
Adding the commutator to the anticommutator gives: σ_{a}σ_{b} = δ_{ab} I + i ε_{abc} σ_{c}
If i is identified with the pseudoscalar σ_{1}σ_{2}σ_{3} then the right hand side becomes a·b + a∧b, which is also the definition for the product of two vectors in geometric algebra.
Exponential of a Pauli vector: e^{i a (n̂·σ⃗)} = I cos(a) + i (n̂·σ⃗) sin(a), for a unit vector n̂.
The real linear span of {I, iσ_{1}, iσ_{2}, iσ_{3}} is isomorphic to the real algebra of quaternions^{w} ℍ. The isomorphism from ℍ to this set is given by the following map (notice the reversed signs for the Pauli matrices): 1 ↦ I, i ↦ −iσ_{1}, j ↦ −iσ_{2}, k ↦ −iσ_{3}.
Quaternions form a division algebra^{*}—every nonzero element has an inverse—whereas Pauli matrices do not.
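The product rule above can be checked numerically. The following sketch (plain Python, helper names are illustrative) verifies σ_{a}σ_{b} = δ_{ab}I + iε_{abc}σ_{c} for all nine index pairs, representing each 2 × 2 matrix as a nested list:

```python
# Verify the Pauli matrix product rule sigma_a*sigma_b = delta_ab*I + i*eps_abc*sigma_c
# using plain 2x2 complex matrices as nested lists (helper names are illustrative).

I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
sigma = [s1, s2, s3]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def eps(a, b, c):
    # Levi-Civita symbol for indices 0, 1, 2
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

ok = True
for a in range(3):
    for b in range(3):
        lhs = matmul(sigma[a], sigma[b])
        rhs = scale(1 if a == b else 0, I2)
        for c in range(3):
            rhs = add(rhs, scale(1j * eps(a, b, c), sigma[c]))
        if any(abs(lhs[i][j] - rhs[i][j]) > 1e-12 for i in range(2) for j in range(2)):
            ok = False
assert ok
```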
Maxwell's equations
From Wikipedia:Mathematical descriptions of the electromagnetic field
Analogous to the tensor formulation, two objects, one for the field and one for the current, are introduced. In geometric algebra^{w} (GA) these are multivectors^{w}. The field multivector, known as the Riemann–Silberstein vector^{*}, is
F = E + icB
and the current multivector is
J = cρ − j
where, in the algebra of physical space^{*} (APS), the vector basis is {σ_{1}, σ_{2}, σ_{3}}. The unit pseudoscalar^{*} is i = σ_{1}σ_{2}σ_{3} (assuming an orthonormal basis^{*}). Orthonormal basis vectors share the algebra of the Pauli matrices^{*}, but are usually not equated with them. After defining the derivative
∇ = σ_{k} ∂_{k}
Maxwell's equations are reduced to the single equation^{[19]}
( (1/c) ∂/∂t + ∇ ) F = μ_{0}c J
In three dimensions, the derivative has a special structure allowing the introduction of a cross product:
from which it is easily seen that Gauss's law is the scalar part, the Ampère–Maxwell law is the vector part, Faraday's law is the pseudovector part, and Gauss's law for magnetism is the pseudoscalar part of the equation. After expanding and rearranging, this can be written as
We can identify APS as a subalgebra of the spacetime algebra^{*} (STA), defining σ_{k} = γ_{k}γ_{0} and i = σ_{1}σ_{2}σ_{3} = γ_{0}γ_{1}γ_{2}γ_{3}. The γ_{μ}s have the same algebraic properties of the gamma matrices^{*} but their matrix representation is not needed. The derivative is now
∇ = γ^{μ} ∂_{μ}
The Riemann–Silberstein becomes a bivector
and the charge and current density become a vector
Owing to the identity
Maxwell's equations reduce to the single equation
∇F = μ_{0}c J
Functions
From Wikipedia:Function (mathematics)
In mathematics, a function^{[20]} is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x^{2}. The output of a function f corresponding to an input x is denoted by f(x) (read "f of x"). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9. Likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. (The same output may be produced by more than one input, but each input gives only one output.) The input variable(s) are sometimes referred to as the argument(s) of the function.
Euclids "common notions"
Things that do not differ from one another are equal to one another
a=a 
Things that are equal to the same thing are also equal to one another
If a=b and b=c 
 then a=c 
If equals are added to equals, then the wholes are equal
If a=b and c=d 
 then a+c=b+d 
If equals are subtracted from equals, then the remainders are equal
If a=b and c=d 
 then a−c=b−d 
The whole is greater than the part.
If  b>0  then a+b>a 
Elementary algebra
Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (nonspecified) numbers.
Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition^{w}, subtraction^{w}, multiplication^{w}, division^{w} and exponentiation^{w}). For example,
 Added terms are simplified using coefficients. For example, x + x + x can be simplified as 3x (where 3 is a numerical coefficient).
 Multiplied terms are simplified using exponents. For example, x × x × x is represented as x^{3}
 Like terms are added together,^{[21]} for example, 2x^{2} + 3ab − x^{2} + ab is written as x^{2} + 4ab, because the terms containing x^{2} are added together, and the terms containing ab are added together.
 Brackets can be "multiplied out", using the distributive property^{w}. For example, x(2x + 3) can be written as (x × 2x) + (x × 3) which can be written as 2x^{2} + 3x
 Expressions can be factored. For example, 6x^{5} + 3x^{2}, by dividing both terms by 3x^{2}, can be written as 3x^{2}(2x^{3} + 1)
For any function f, if a=b then f(a)=f(b).
One must be careful though when squaring both sides of an equation since this can result in solutions that don't satisfy the original equation.
 (−2)^{2} = 2^{2} yet −2 ≠ 2
A function is an even function^{w} if f(−x) = f(x)
A function is an odd function^{w} if f(−x) = −f(x)
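The caution about squaring can be seen in a short numeric sketch (the equation below is an illustrative choice, not from the text): x = √(x + 2) has the single solution x = 2, but squaring it gives x^{2} = x + 2, which −1 also satisfies.

```python
import math

# Squaring both sides can introduce extraneous solutions (illustrative example):
# x = sqrt(x + 2) holds only for x = 2, but x^2 = x + 2 holds for x = 2 and x = -1.

def satisfies_original(x):
    return math.isclose(x, math.sqrt(x + 2))

def satisfies_squared(x):
    return math.isclose(x * x, x + 2)

assert satisfies_squared(2) and satisfies_original(2)
assert satisfies_squared(-1) and not satisfies_original(-1)  # -1 is extraneous
```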
Trigonometry
The law of cosines^{w}, c^{2} = a^{2} + b^{2} − 2ab·cos(γ), reduces to the Pythagorean theorem^{w} when γ = 90 degrees (since cos(90°) = 0).
The law of sines^{w} (also known as the "sine rule") for an arbitrary triangle states:
a/sin(A) = b/sin(B) = c/sin(C) = abc/(2T)
where T is the area of the triangle
The law of tangents^{w}: (a − b)/(a + b) = tan((A − B)/2) / tan((A + B)/2)
Right triangles
A right triangle is a triangle with gamma=90 degrees.
For small values of x, sin x ≈ x. (If x is in radians).
SOH → sin = "opposite" / "hypotenuse"
CAH → cos = "adjacent" / "hypotenuse"
TOA → tan = "opposite" / "adjacent"
sin A = a/c, cos A = b/c, tan A = a/b
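The ratio definitions and the small-angle approximation can be checked numerically; this sketch uses a 3-4-5 right triangle (an illustrative choice):

```python
import math

# Check SOH-CAH-TOA for a 3-4-5 right triangle (angle A opposite a = 3,
# adjacent b = 4, hypotenuse c = 5), plus sin(x) ~ x for small x in radians.

a, b, c = 3.0, 4.0, 5.0
A = math.atan2(a, b)  # angle A from its opposite and adjacent sides

assert math.isclose(math.sin(A), a / c)  # SOH
assert math.isclose(math.cos(A), b / c)  # CAH
assert math.isclose(math.tan(A), a / b)  # TOA

x = 0.01  # radians
assert abs(math.sin(x) - x) < 1e-6
```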
Hyperbolic functions
 See also: Hyperbolic angle^{*}
Hyperbolic functions^{w} are analogs of the ordinary trigonometric, or circular, functions.
 Hyperbolic sine: sinh(x) = (e^{x} − e^{−x})/2
 Hyperbolic cosine: cosh(x) = (e^{x} + e^{−x})/2
 Hyperbolic tangent: tanh(x) = sinh(x)/cosh(x)
 Hyperbolic cotangent: coth(x) = cosh(x)/sinh(x)
 Hyperbolic secant: sech(x) = 1/cosh(x)
 Hyperbolic cosecant: csch(x) = 1/sinh(x)
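The exponential definitions can be checked against the standard library, along with the identity cosh^{2}(x) − sinh^{2}(x) = 1:

```python
import math

# Check the exponential definitions of sinh and cosh against math.sinh/math.cosh,
# and the hyperbolic identity cosh^2(x) - sinh^2(x) = 1.

def sinh(x): return (math.exp(x) - math.exp(-x)) / 2
def cosh(x): return (math.exp(x) + math.exp(-x)) / 2

for x in [-2.0, 0.0, 0.5, 3.0]:
    assert math.isclose(sinh(x), math.sinh(x), abs_tol=1e-12)
    assert math.isclose(cosh(x), math.cosh(x))
    assert math.isclose(cosh(x) ** 2 - sinh(x) ** 2, 1.0)
```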
Polynomials
 See Runge's phenomenon^{*}, Polynomial ring^{*}, System of polynomial equations^{*}, Rational root theorem^{*}, Descartes' rule of signs^{*}, and Complex conjugate root theorem^{*}
 From Wikipedia:Polynomial:
A polynomial^{w} can always be written in the form
a_{n}x^{n} + a_{n−1}x^{n−1} + ... + a_{1}x + a_{0}
where a_{0}, ..., a_{n} are constants called coefficients and n is the degree^{w} of the polynomial.
 A linear polynomial^{*} is a polynomial of degree one.
Each individual term^{*} is the product of the coefficient^{*} and a variable raised to a nonnegative integer power.
 A monomial^{*} has only one term.
 A binomial^{*} has 2 terms.
Fundamental theorem of algebra^{*}:
 Every singlevariable, degree n polynomial with complex coefficients has exactly n complex roots^{w}.
 However, some or even all of the roots might be the same number.
 A root (or zero) of a function Z is a value of x for which Z(x)=0.
 If Z(x) = (x − z_{2})^{k}·Q(x), where Q(z_{2}) ≠ 0, then z_{2} is a root of multiplicity^{*} k.^{[22]} z_{2} is a root of multiplicity k−1 of the derivative (Derivative is defined below) of Z(x).
 If k=1 then z_{2} is a simple root.
 The graph is tangent to the x axis at the multiple roots of f and not tangent at the simple roots.
 The graph crosses the xaxis at roots of odd multiplicity and bounces off (not goes through) the xaxis at roots of even multiplicity.
 Near x=z_{2} the graph has the same general shape as A·(x − z_{2})^{k}
 The roots of ax^{2} + bx + c = 0 are given by the Quadratic formula^{w}:
x = (−b ± √(b^{2} − 4ac)) / (2a)
See Completing the square^{w}
 a(x − h)^{2} + k is a parabola shifted to the right h units, stretched by a factor of a, and moved upward k units.
 k is the value at x=h and is either the maximum or the minimum value.
 (x + y)^{n} = Σ_{k=0}^{n} C(n, k)·x^{n−k}y^{k}, where C(n, k) = n!/(k!(n−k)!). See Binomial coefficient^{w}
The polynomial remainder theorem^{w} states that the remainder of the division of a polynomial Z(x) by the linear polynomial x − a is equal to Z(a). See Ruffini's rule^{*}.
Determining the value of Z(a) is sometimes easier if we use Horner's method^{*} (synthetic division^{*}) by writing the polynomial in the form
Z(x) = a_{0} + x(a_{1} + x(a_{2} + x(a_{3} + ...)))
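Horner's nested form translates directly into a short loop; the sketch below (example coefficients chosen for illustration) also shows the remainder theorem, since the value it returns at a is the remainder of dividing by x − a:

```python
# Horner's method: evaluate a polynomial with one multiplication per coefficient.
# By the polynomial remainder theorem the value Z(a) is also the remainder of
# dividing Z(x) by (x - a). Example coefficients are illustrative.

def horner(coeffs, x):
    """coeffs are ordered from highest degree to lowest."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# Z(x) = 2x^3 - 6x^2 + 2x - 1
Z = [2, -6, 2, -1]
assert horner(Z, 3) == 5    # Z(3) = 54 - 54 + 6 - 1 = 5
assert horner(Z, 1) == -3   # Z(1) = 2 - 6 + 2 - 1 = -3
```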
A monic polynomial^{*} is a one variable polynomial in which the leading coefficient is equal to 1.
Rational functions
A rational function^{*} is a function of the form
f(x) = Z(x) / P(x)
where Z and P are polynomials of degree n and m respectively. It has n zeros^{w} and m poles^{w} (counted with multiplicity, over the complex numbers). A pole is a value of x at which |f(x)| → infinity.
 The vertical asymptotes^{w} are the poles of the rational function.
 If n<m then f(x) has a horizontal asymptote at the x axis
 If n=m then f(x) has a horizontal asymptote at y = k, where k is the ratio of the leading coefficients.
 If n>m then f(x) has no horizontal asymptote.
 Given two polynomials Z(x) and P(x) = (x − p_{1})(x − p_{2})...(x − p_{m}), where the p_{i} are distinct constants and deg Z < m, partial fractions^{w} are generally obtained by supposing that
Z(x)/P(x) = c_{1}/(x − p_{1}) + c_{2}/(x − p_{2}) + ... + c_{m}/(x − p_{m})
 and solving for the c_{i} constants, by substitution, by equating the coefficients^{*} of terms involving the powers of x, or otherwise.
 (This is a variant of the method of undetermined coefficients^{*}.)^{[23]}
 If the degree of Z is not less than m then use long division to divide P into Z. The remainder then replaces Z in the equation above and one proceeds as before.
 If then
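A decomposition of this kind can be verified numerically; the fraction below is an illustrative choice, not taken from the text:

```python
import math

# Numeric check of a partial fraction decomposition (illustrative example):
#   1 / ((x - 1)(x - 2)) = -1/(x - 1) + 1/(x - 2)

def f(x):  return 1.0 / ((x - 1) * (x - 2))
def pf(x): return -1.0 / (x - 1) + 1.0 / (x - 2)

for x in [0.0, 0.5, 3.0, 10.0]:
    assert math.isclose(f(x), pf(x))
```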
A Generalized hypergeometric series^{*} is given by
Σ_{k≥0} c_{k} x^{k}
 where c_{0}=1 and the ratio of successive coefficients, c_{k+1}/c_{k}, is a rational function of k.
 The ratio c_{k+1}/c_{k}, viewed as a function of k, has n zeros and m poles.
 Basic hypergeometric series^{*}, or hypergeometric q-series, are q-analogue^{*} generalizations of generalized hypergeometric series.^{[24]}
 Roughly speaking a q-analog^{*} of a theorem, identity or expression is a generalization involving a new parameter q that returns the original theorem, identity or expression in the limit as q → 1^{[25]}
 We define the q-analog of n, also known as the q-bracket or q-number of n, to be [n]_{q} = (1 − q^{n})/(1 − q) = 1 + q + q^{2} + ... + q^{n−1}
 one may define the q-analog of the factorial^{w}, known as the q-factorial^{*}, by [n]_{q}! = [1]_{q}·[2]_{q}·...·[n]_{q}
 Elliptic hypergeometric series^{*} are generalizations of basic hypergeometric series.
 An elliptic function is a meromorphic function that is periodic in two directions.
A generalized hypergeometric function^{*} is given by
So for e^{x} (see below) we have:
e^{x} = _{0}F_{0}(;;x) = Σ_{k≥0} x^{k}/k!
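The "series defined by its term ratio" idea can be sketched directly: for c_{0} = 1 and ratio c_{k+1}/c_{k} = x/(k + 1) the partial sums converge to e^{x} (helper name is illustrative):

```python
import math

# Build a series from its term ratios: with c_0 = 1 and
# c_{k+1}/c_k = x/(k + 1), the partial sums converge to e^x.

def series_from_ratio(x, terms=30):
    c, total = 1.0, 0.0
    for k in range(terms):
        total += c
        c *= x / (k + 1)   # ratio of successive coefficients
    return total

for x in [0.0, 1.0, 2.5, -1.0]:
    assert math.isclose(series_from_ratio(x), math.exp(x), rel_tol=1e-9)
```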
Integration and differentiation
 See also: Hyperreal number^{w} and Implicit differentiation^{w}
The integral^{w} is a generalization of multiplication.
 For example: a unit mass dropped from point x_{2} to point x_{1} will release energy.
 The usual equation, E = F·(x_{2} − x_{1}), is a simple multiplication.
 But that equation can't be used if the strength of gravity is itself a function of x.
 The strength of gravity at x_{1} would be different than it is at x_{2}.
 And in reality gravity really does depend on x (x is the distance from the center of the earth): F(x) = GM/x^{2} for a unit mass.
 (See inverse-square law^{w}.)
 However, the corresponding Definite integral^{w} is easily solved:
 E = ∫_{x_{1}}^{x_{2}} (GM/x^{2}) dx = GM(1/x_{1} − 1/x_{2})
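The gravity example can be checked numerically: a Riemann sum of k/x^{2} between the two points matches the closed form k(1/x_{1} − 1/x_{2}). The constants below are illustrative values, not physical ones:

```python
import math

# Riemann-sum check of the inverse-square integral: the "energy" released
# moving from x2 down to x1 in a field k/x^2 is k*(1/x1 - 1/x2).
# k, x1, x2 are illustrative values, not physical constants.

k, x1, x2 = 5.0, 2.0, 10.0

def riemann_sum(f, a, b, n=20000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx  # midpoint rule

numeric = riemann_sum(lambda x: k / x ** 2, x1, x2)
exact = k * (1 / x1 - 1 / x2)
assert math.isclose(numeric, exact, rel_tol=1e-6)
```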
The rules for solving it are surprisingly simple. F(x) is called the indefinite integral^{w} (antiderivative^{w}).
k and y are arbitrary constants:
∫ k·x^{y} dx = k·x^{y+1}/(y+1) + C (for y ≠ −1)
(Units (feet, mm...) behave exactly like constants.)
And most conveniently: ∫ (f(x) + g(x)) dx = ∫ f(x) dx + ∫ g(x) dx
 The integral of a function is equal to the area under the curve.
 When the "curve" is a constant (in other words, k•x^{0}) then the integral reduces to ordinary multiplication.
The derivative^{w} is a generalization of division.
The derivative of the integral of f(x) is just f(x).
The derivative of a function at any point is equal to the slope of the function at that point.
The equation of the line tangent to a function at point a is y = f(a) + f′(a)·(x − a)
The Lipschitz constant^{w} of a function is a real number for which the absolute value of the slope of the function at every point is not greater than this real number.
The derivative of f(x) where f(x) = k·x^{y} is f′(x) = k·y·x^{y−1}
 The derivative of a is
 The integral of 1/x is ln(x)^{[26]}. See natural log^{w}
Chain rule^{w} for the derivative of a function of a function: d/dx f(g(x)) = f′(g(x))·g′(x)
The Chain rule for a function of 2 functions: d/dt f(g(t), h(t)) = (∂f/∂g)·(dg/dt) + (∂f/∂h)·(dh/dt)
 (See "partial derivatives" below)
The Product rule^{w} can be considered a special case of the chain rule^{w} for several variables^{[27]}
Product rule^{w}: (f·g)′ = f′·g + f·g′
 (because Δf·Δg is negligible)
General Leibniz rule^{*}: (f·g)^{(n)} = Σ_{k=0}^{n} C(n, k)·f^{(k)}·g^{(n−k)}
By the chain rule: d/dx (1/g) = −g′/g^{2}
Therefore the Quotient rule^{w}: (f/g)′ = (f′·g − f·g′)/g^{2}
There is a chain rule for integration but the inner function must have the form ax + b so that its derivative is the constant a and therefore ∫ f(ax + b) dx = F(ax + b)/a + C.
Actually the inner function can have the form g(x) so that its derivative is g′(x) and therefore ∫ f(g(x))·g′(x) dx = F(g(x)) + C, provided that all factors involving x cancel out.
The product rule for integration is called Integration by parts^{w}: ∫ u dv = u·v − ∫ v du
One can use partial fractions^{w} or even the Taylor series^{w} to convert difficult integrals into a more manageable form.
The fundamental theorem of Calculus is:
∫_{a}^{b} f(x) dx = F(b) − F(a)
The fundamental theorem of calculus is just the particular case of the Leibniz integral rule^{*}:
d/dx ∫_{a(x)}^{b(x)} f(x, t) dt = f(x, b(x))·b′(x) − f(x, a(x))·a′(x) + ∫_{a(x)}^{b(x)} (∂f/∂x) dt
In calculus, a function f defined on a subset of the real numbers with real values is called monotonic^{*} if and only if it is either entirely nonincreasing, or entirely nondecreasing.^{[28]}
A differential form^{w} is a generalisation of the notion of a differential^{w} that is independent of the choice of coordinate system^{*}. f(x,y) dx ∧ dy is a 2form in 2 dimensions (an area element). The derivative^{w} operation on an nform is an n+1form; this operation is known as the exterior derivative^{w}. By the generalized Stokes' theorem^{w}, the integral of a function over the boundary of a manifold^{w} is equal to the integral of its exterior derivative on the manifold itself.
Taylor & Maclaurin series
If we know the value of a smooth function^{w} at x=0 (smooth means all its derivatives are continuous^{w}) and we also know the value of all of its derivatives at x=0 then we can determine the value at any other point x by using the Maclaurin series^{w} ("!" means factorial^{w}):
f(x) = f(0) + f′(0)·x + f″(0)·x^{2}/2! + f‴(0)·x^{3}/3! + ...
The proof of this is actually quite simple. Plugging in a value of x=0 causes all terms but the first to become zero. So, assuming that such a function exists, a_{0} must be the value of the function at x=0. Simply differentiate both sides of the equation and repeat for the next term. And so on.
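For the exponential function every derivative at 0 equals 1, so the partial sums Σ x^{n}/n! converge to e^{x}; a quick numeric sketch:

```python
import math

# Maclaurin partial sums for f(x) = e^x: every derivative at 0 equals 1,
# so the partial sums sum(x^n / n!) approach e^x.

def maclaurin_exp(x, terms):
    return sum(x ** n / math.factorial(n) for n in range(terms))

assert math.isclose(maclaurin_exp(1.0, 20), math.e, rel_tol=1e-12)
assert abs(maclaurin_exp(2.0, 30) - math.exp(2.0)) < 1e-9
```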
 The Taylor series^{w} generalizes this formula: f(x) = Σ_{n≥0} f^{(n)}(a)·(x − a)^{n}/n!
 An analytic function^{w} is a function whose Taylor series converges for every z_{0} in its domain^{w}; analytic functions are infinitely differentiable^{w}.
 Any vector g = (z_{0}, α_{0}, α_{1}, ...) is a germ^{*} if it represents a power series of an analytic function^{w} around z_{0} with some radius of convergence r > 0.
 The set of germs is a Riemann surface^{w}.
 Riemann surfaces are the objects on which multivalued functions become singlevalued.
 A connected component^{*} of (i.e., an equivalence class) is called a sheaf^{*}.
We can easily determine the Maclaurin series expansion of the exponential function^{w} (because it is equal to its own derivative):^{[26]}
e^{x} = 1 + x + x^{2}/2! + x^{3}/3! + ...
 The above holds true even if x is a matrix. See Matrix exponential^{*}
And cos(x)^{w} and sin(x)^{w} (because cosine is the derivative of sine which is the derivative of cosine):
cos(x) = 1 − x^{2}/2! + x^{4}/4! − ...
sin(x) = x − x^{3}/3! + x^{5}/5! − ...
It then follows that e^{ix} = cos(x) + i·sin(x) and therefore e^{iπ} = −1. See Euler's formula^{w}
 x is the angle in radians^{*}.
 This makes the equation for a circle in the complex plane, and by extension sine and cosine, extremely simple and easy to work with especially with regard to differentiation and integration.
 Differentiation and integration are replaced with multiplication and division. Calculus is replaced with algebra. Therefore any expression that can be represented as a sum of sine waves can be easily differentiated or integrated.
Fourier Series
The Maclaurin series can't be used for a discontinuous function like a square wave because it is not differentiable. (Distributions^{*} make it possible to differentiate functions whose derivatives do not exist in the classical sense. See Generalized function^{*}.)
But remarkably we can use the Fourier series^{w} to expand it or any other periodic function^{w} into an infinite sum of sine waves each of which is fully differentiable^{w}!
 The reason this works is because sine and cosine are orthogonal functions^{*}.
 That means that multiplying any 2 sine waves of frequency n and frequency m and integrating over one period will always equal zero unless n=m.
 See the graph of sin^{2}(x) to the right.
 See Amplitude_modulation^{*}
 And of course ∫ f_{n}*(f_{1}+f_{2}+f_{3}+...) = ∫ (f_{n}*f_{1}) + ∫ (f_{n}*f_{2}) + ∫ (f_{n}*f_{3}) +...
 The complex form of the Fourier series uses complex exponentials instead of sine and cosine and uses both positive and negative frequencies (clockwise and counter clockwise) whose imaginary parts cancel.
 The complex coefficients encode both amplitude and phase and are complex conjugates of each other.
 where the dot between x and ν indicates the inner product^{w} of R^{n}.
 A 2 dimensional Fourier series is used in video compression.
 A discrete Fourier transform^{*} can be computed very efficiently by a fast Fourier transform^{*}.
 In mathematical analysis, many generalizations of Fourier series have proven to be useful.
 They are all special cases of decompositions over an orthonormal basis of an inner product space.^{[29]}
 Spherical harmonics^{*} are a complete set of orthogonal functions on the sphere, and thus may be used to represent functions defined on the surface of a sphere, just as circular functions (sines and cosines) are used to represent functions on a circle via Fourier series.^{[30]}
 Spherical harmonics are basis functions^{*} for SO(3). See Laplace series^{w}.
 Every continuous function in the function space can be represented as a linear combination^{*} of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors.
 Every quadratic polynomial can be written as a·1 + b·t + c·t^{2}, that is, as a linear combination of the basis functions 1, t, and t^{2}.
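The orthogonality that makes Fourier series work can be checked numerically: the integral of sin(nx)·sin(mx) over one period vanishes unless n = m, when it equals π:

```python
import math

# Orthogonality of sines: the integral of sin(n*x)*sin(m*x) over one period
# is 0 unless n = m, in which case it equals pi (midpoint-rule integration).

def inner(n, m, steps=20000):
    dx = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += math.sin(n * x) * math.sin(m * x) * dx
    return total

assert abs(inner(3, 5)) < 1e-9                 # different frequencies: zero
assert abs(inner(4, 4) - math.pi) < 1e-6       # same frequency: pi
```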
Transforms
Fourier transforms^{w} generalize Fourier series to nonperiodic functions like a single pulse of a square wave.
The more localized in the time domain (the shorter the pulse) the more the Fourier transform is spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle^{w}.
The Fourier transform of the Dirac delta function^{w} gives G(f)=1
 Laplace transforms^{w} generalize Fourier transforms to complex frequency s = σ + iω.
 Complex frequency includes a term corresponding to the amount of damping.
 L{e^{−at}} = 1/(s + a), (assuming a > 0)
 The inverse Laplace transform^{w} is given by
f(t) = (1/2πi) ∫_{γ−i∞}^{γ+i∞} e^{st} F(s) ds
 where the integration is done along the vertical line Re(s) = γ in the complex plane^{w} such that γ is greater than the real part of all singularities^{*} of F(s) and F(s) is bounded on the line, for example if the contour path is in the region of convergence^{*}.
 If all singularities are in the left halfplane, or F(s) is an entire function^{*} , then γ can be set to zero and the above inverse integral formula becomes identical to the inverse Fourier transform^{*}.^{[31]}
 Integral transforms^{w} generalize Fourier transforms to other kernels^{w} (besides sine^{w} and cosine^{w})
 Cauchy kernel =
 Hilbert kernel =
 Poisson Kernel:
 For the ball of radius r, , in R^{n}, the Poisson kernel takes the form:
 where x is a point inside the ball, ζ lies on S (the surface of the ball), and ω is the surface area of the unit n-sphere^{*}.
 unit disk (r=1) in the complex plane:^{[32]}
 Dirichlet kernel: D_{n}(x) = Σ_{k=−n}^{n} e^{ikx} = sin((n + 1/2)x) / sin(x/2)
The convolution^{*} theorem states that^{[33]}
F{f ∗ g} = F{f} · F{g}
where · denotes pointwise multiplication. It also works the other way around (up to a normalization constant that depends on the Fourier convention):
F{f · g} = F{f} ∗ F{g}
By applying the inverse Fourier transform F^{−1}, we can write:
f ∗ g = F^{−1}{ F{f} · F{g} }
and:
f · g = F^{−1}{ F{f} ∗ F{g} }
This theorem also holds for the Laplace transform^{w}.
The Hilbert transform^{*} is a multiplier operator^{*}. The multiplier of H is σ_{H}(ω) = −i sgn(ω) where sgn is the signum function^{*}. Therefore:
F(H(u))(ω) = σ_{H}(ω)·F(u)(ω)
where F denotes the Fourier transform^{w}.
Since sgn(x) = sgn(2πx), it follows that this result applies to the three common definitions of the Fourier transform^{w}.
By Euler's formula^{w}, σ_{H}(ω) = e^{−iπ/2} for ω > 0 and σ_{H}(ω) = e^{+iπ/2} for ω < 0.
Therefore, H(u)(t) has the effect of shifting the phase of the negative frequency^{*} components of u(t) by +90° (π/2 radians) and the phase of the positive frequency components by −90°.
And i·H(u)(t) has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation.
In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear timeinvariant system (LTI).
At any given moment, the output is an accumulated effect of all the prior values of the input function
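The "output is the convolution of the input with the impulse response" statement can be sketched discretely; the decaying-exponential impulse response below is an illustrative (RC-filter-like) choice:

```python
# Discrete LTI sketch: the output is the convolution of the input signal with
# the impulse response. The decaying exponential h is an illustrative choice.

def convolve(signal, impulse_response):
    n = len(signal) + len(impulse_response) - 1
    out = [0.0] * n
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h   # each input value contributes to later outputs
    return out

h = [0.5 ** k for k in range(8)]     # impulse response: 1, 0.5, 0.25, ...
x = [1.0, 0.0, 0.0, 0.0]             # a unit impulse input
y = convolve(x, h)
assert y[:8] == h                    # impulse in -> impulse response out

step = [1.0] * 10                    # a step input: output accumulates prior inputs
ys = convolve(step, h)
assert abs(ys[9] - sum(h)) < 1e-12   # long-run output approaches sum of h
```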
Differential equations
 See also: Variation of parameters^{*}
Simple harmonic motion^{*} of a mass on a spring is a second-order linear ordinary differential equation^{w}:
m·(d^{2}x/dt^{2}) = −k·x
where m is the inertial mass, x is its displacement from the equilibrium, and k is the spring constant.
Solving for x produces
x(t) = A·cos(ωt − φ)
A is the amplitude (maximum displacement from the equilibrium position), ω = √(k/m) is the angular frequency^{w}, and φ is the phase.
Energy passes back and forth between the potential energy in the spring and the kinetic energy of the mass.
The important thing to note here is that the frequency of the oscillation depends only on the mass and the stiffness of the spring and is totally independent of the amplitude.
That is the defining characteristic of resonance.
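The amplitude independence can be seen numerically: integrating m·x″ = −k·x with a simple semi-implicit Euler scheme gives the same period 2π√(m/k) for any amplitude (m, k, and the amplitudes below are illustrative values):

```python
import math

# Numeric sketch of m*x'' = -k*x using semi-implicit Euler. Starting from
# maximum displacement with zero velocity, the time to first reach x = 0 is a
# quarter period; it matches 2*pi*sqrt(m/k)/4 regardless of amplitude.
# m, k, and the amplitudes are illustrative values.

def quarter_period(amplitude, m=2.0, k=8.0, dt=1e-5):
    x, v, t = amplitude, 0.0, 0.0
    while x > 0:
        v += (-k / m * x) * dt
        x += v * dt
        t += dt
    return t

T = 2 * math.pi * math.sqrt(2.0 / 8.0)  # exact period for m=2, k=8
for A in (0.1, 1.0, 5.0):
    assert abs(4 * quarter_period(A) - T) < 1e-2
```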
Kirchhoff's voltage law^{*} states that the sum of the emfs in any closed loop of any electronic circuit is equal to the sum of the voltage drops^{*} in that loop:^{[34]}
V(t) = L·(dI/dt) + R·I + Q/C
V is the voltage, R is the resistance, L is the inductance, C is the capacitance.
I = dQ/dt is the current.
It makes no difference whether the current is a small number of charges moving very fast or a large number of charges moving slowly.
In reality the latter is the case^{*}.
If V(t)=0 then the only solution to the equation is the transient response which is a rapidly decaying sine wave with the same frequency as the resonant frequency of the circuit.
 Like a mass (inductance) on a spring (capacitance) the circuit will resonate at one frequency.
 Energy passes back and forth between the capacitor and the inductor with some loss as it passes through the resistor.
If V(t)=sin(t) from −∞ to +∞ then the only solution is a sine wave with the same frequency as V(t) but with a different amplitude and phase.
If V(t) is zero until t=0 and then equals sin(t) then I(t) will be zero until t=0 after which it will consist of the steady state response plus a transient response.
From Wikipedia:Characteristic equation (calculus):
Starting with a linear homogeneous differential equation with constant coefficients a_{n}, a_{n−1}, ..., a_{1}, a_{0},
a_{n}y^{(n)} + a_{n−1}y^{(n−1)} + ... + a_{1}y′ + a_{0}y = 0
it can be seen that if y = e^{rx}, each term would be a constant multiple of e^{rx}. This results from the fact that the derivative of the exponential function^{w} is a multiple of itself. Therefore, y′ = re^{rx}, y″ = r^{2}e^{rx}, and y^{(n)} = r^{n}e^{rx} are all multiples. This suggests that certain values of r will allow multiples of e^{rx} to sum to zero, thus solving the homogeneous differential equation.^{[35]} In order to solve for r, one can substitute y = e^{rx} and its derivatives into the differential equation to get
a_{n}r^{n}e^{rx} + a_{n−1}r^{n−1}e^{rx} + ... + a_{1}re^{rx} + a_{0}e^{rx} = 0
Since e^{rx} can never equal zero, it can be divided out, giving the characteristic equation
a_{n}r^{n} + a_{n−1}r^{n−1} + ... + a_{1}r + a_{0} = 0
By solving for the roots, r, in this characteristic equation, one can find the general solution to the differential equation.^{[36]}^{[37]} For example, if r is found to equal 3, then the general solution will be y(x) = ce^{3x}, where c is an arbitrary constant^{w}.
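The method can be sketched numerically for a second-order example (chosen for illustration): the characteristic equation of y″ + 3y′ + 2y = 0 is r^{2} + 3r + 2 = 0, with roots −1 and −2, and finite differences confirm that e^{−x} and e^{−2x} solve the ODE:

```python
import math

# Characteristic-equation sketch for y'' + 3y' + 2y = 0 (illustrative example):
# r^2 + 3r + 2 = 0 has roots r = -1 and r = -2, so y = e^{-x} and y = e^{-2x}
# solve the ODE. Verified with central finite differences.

def check_solution(r, x=0.7, h=1e-5):
    y = lambda t: math.exp(r * t)
    d1 = (y(x + h) - y(x - h)) / (2 * h)            # y'
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2  # y''
    return abs(d2 + 3 * d1 + 2 * y(x)) < 1e-4

# roots of r^2 + 3r + 2 = 0 via the quadratic formula
r1 = (-3 + math.sqrt(9 - 8)) / 2   # -1
r2 = (-3 - math.sqrt(9 - 8)) / 2   # -2
assert r1 == -1.0 and r2 == -2.0
assert check_solution(r1) and check_solution(r2)
assert not check_solution(0.5)     # e^{x/2} is not a solution
```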
Partial derivatives
Partial derivatives^{w} and multiple integrals^{w} generalize derivatives and integrals to multiple dimensions.
The partial derivative with respect to one variable is found by simply treating all other variables as though they were constants.
Multiple integrals are found the same way.
Let f(x, y, z) be a scalar function (for example electric potential energy or temperature).
 A 2 dimensional example of a scalar function would be an elevation map.
 (Contour lines of an elevation map are an example of a level set^{*}.)
The Gradient^{w} of f(x, y, z) is a vector field whose value at each point is a vector (technically it's a covector^{w} because it has units of distance^{−1}) that points "uphill" (in the direction of steepest increase) with a magnitude equal to the slope^{w} of the function at that point.
You can think of it as how much the function changes per unit distance.
For static (unchanging) fields the electric field^{w} is the negative of the Gradient of the electric potential.
The negative gradient of temperature gives the direction of heat flow (Fourier's law).
The Divergence^{w} of a vector field is a scalar.
The divergence of the electric field is nonzero wherever there is electric charge^{w} and zero everywhere else.
Field lines^{w} begin and end at charges because the charges create the electric field.
The Laplacian^{w} is the divergence of the gradient of a function:
∇^{2}f = ∇·(∇f) = ∂^{2}f/∂x^{2} + ∂^{2}f/∂y^{2} + ∂^{2}f/∂z^{2}
 elliptic operators^{*} generalize the Laplacian.
The Curl^{w} of a vector field describes how much the vector field is twisted.
(The field may even go in circles.)
The curl at a certain point of a magnetic field^{w} is the current^{w} vector at that point because current creates the magnetic field^{w}.
In 3 dimensions the dual of the current vector is a bivector.
In 2 dimensions this reduces to a single scalar
The curl of the gradient of any scalar field is always zero.
The curl of a vector field in 4 dimensions would no longer be a vector. It would be a bivector. However the curl of a bivector field in 4 dimensions would still be a vector.
See also: differential forms^{*}.
The Gradient^{w} of a vector field is a tensor field. Each row is the gradient of the corresponding scalar function:
 Remember that dy∧dx = −dx∧dy because rotation from y to x is the negative of rotation from x to y.
Partial differential equations can be classified as parabolic^{*}, hyperbolic^{*} and elliptic^{*}.
The total derivative^{w} of f(x(t), y(t)) with respect to t is^{[38]}
df/dt = (∂f/∂x)·(dx/dt) + (∂f/∂y)·(dy/dt)
And the differential^{w} is
df = (∂f/∂x)dx + (∂f/∂y)dy
The line integral^{w} along a 2D vector field F = (P, Q) is:
∫_{C} F·dr = ∫_{C} (P dx + Q dy)
Green's theorem^{w} states that if you want to know how many field lines cross (or run parallel to) the boundary of a given region then you can either perform a line integral or you can simply count the number of charges (or the amount of current) within that region. See Divergence theorem^{w}
In 2 dimensions this is
∮_{C} (P dx + Q dy) = ∬_{D} (∂Q/∂x − ∂P/∂y) dx dy
Green's theorem is perfectly obvious when dealing with vector fields but is much less obvious when applied to complex valued functions in the complex plane.
In the complex plane
 External link: http://www.solitaryroad.com/c606.html
The formula for the derivative of a complex function f at a point z_{0} is the same as for a real function:
f′(z_{0}) = lim_{z→z_{0}} (f(z) − f(z_{0})) / (z − z_{0})
Every complex function can be written in the form
f(z) = f(x + iy) = f_{x}(x, y) + i·f_{y}(x, y)
Because the complex plane is two dimensional, z can approach z_{0} from an infinite number of different directions.
However, if within a certain region, the function f is holomorphic^{w} (that is, complex differentiable^{w}) then, within that region, it will only have a single derivative whose value does not depend on the direction in which z approaches z_{0}, despite the fact that f_{x} and f_{y} each have 2 partial derivatives: one in the x and one in the y direction.
This is only possible if the Cauchy–Riemann conditions^{w} are true:
∂f_{x}/∂x = ∂f_{y}/∂y and ∂f_{x}/∂y = −∂f_{y}/∂x
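The conditions can be checked numerically with finite differences; below, u and v denote the real and imaginary parts of f, and z^{2} (holomorphic) is contrasted with the complex conjugate (not holomorphic):

```python
# Numeric check of the Cauchy-Riemann conditions with central finite
# differences: f(z) = z^2 satisfies them, f(z) = conj(z) does not.
# u and v are the real and imaginary parts of f.

def cr_residuals(f, x, y, h=1e-6):
    u = lambda x, y: f(complex(x, y)).real
    v = lambda x, y: f(complex(x, y)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return abs(ux - vy), abs(uy + vx)

r1, r2 = cr_residuals(lambda z: z * z, 1.3, -0.4)
assert r1 < 1e-5 and r2 < 1e-5           # z^2 satisfies Cauchy-Riemann

c1, c2 = cr_residuals(lambda z: z.conjugate(), 1.3, -0.4)
assert c1 > 1.0                          # conj(z) violates u_x = v_y
```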
An entire function^{*}, also called an integral function, is a complexvalued function that is holomorphic at all finite points over the whole complex plane.
As with conservative vector fields, a line integral of a holomorphic function (within a simply connected region) depends only on the starting point and the end point and is totally independent of the path taken.
The starting point and the end point for any loop are the same. This, of course, implies Cauchy's integral theorem^{w} for any holomorphic function f:
Therefore curl and divergence must both be zero for a function to be holomorphic.
Green's theorem^{w} for functions (not necessarily holomorphic) in the complex plane:
Computing the residue^{w} of a monomial^{[39]}
∮_{C} z^{n} dz = ∫_{0}^{2π} (re^{iθ})^{n} · ire^{iθ} dθ = ir^{n+1} ∫_{0}^{2π} e^{i(n+1)θ} dθ
 where C is the circle with radius r, therefore z = re^{iθ} and dz = ire^{iθ} dθ.
The last term in the equation above equals zero when r=0. Since its value is independent of r it must therefore equal zero for all values of r.
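The result can be confirmed numerically: integrating z^{n} around a circle gives 0 for every integer n except n = −1, where the integral is 2πi regardless of the radius:

```python
import cmath, math

# Numeric contour integral of z^n around a circle of radius r: the result is
# 0 for every integer n except n = -1, where it is 2*pi*i for any r.

def contour_integral(n, r, steps=4096):
    total = 0 + 0j
    for k in range(steps):
        theta = 2 * math.pi * (k + 0.5) / steps
        z = r * cmath.exp(1j * theta)
        dz = 1j * z * (2 * math.pi / steps)   # dz = i*r*e^{i*theta} dtheta
        total += z ** n * dz
    return total

assert abs(contour_integral(-1, 0.5) - 2j * math.pi) < 1e-9
assert abs(contour_integral(-1, 3.0) - 2j * math.pi) < 1e-9
assert abs(contour_integral(2, 1.0)) < 1e-9
assert abs(contour_integral(-2, 1.0)) < 1e-9
```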
Cauchy's integral formula^{w}, f(a) = (1/2πi) ∮_{γ} f(z)/(z − a) dz, states that the value of a holomorphic function within a disc is determined entirely by the values on the boundary of the disc.
Divergence can be nonzero outside the disc.
Cauchy's integral formula can be generalized to more than two dimensions.
Which gives:
f^{(n)}(a) = (n!/2πi) ∮_{γ} f(z)/(z − a)^{n+1} dz
 Note that n does not have to be an integer. See Fractional calculus^{*}.
The Taylor series becomes:
f(z) = Σ_{n≥0} (z − a)^{n} · (1/2πi) ∮_{γ} f(w)/(w − a)^{n+1} dw
The Laurent series^{*} for a complex function f(z) about a point z_{0} is given by:
f(z) = Σ_{n=−∞}^{∞} a_{n}(z − z_{0})^{n} where a_{n} = (1/2πi) ∮_{γ} f(z)/(z − z_{0})^{n+1} dz
The positive subscripts correspond to a line integral around the outer part of the annulus and the negative subscripts correspond to a line integral around the inner part of the annulus. In reality it makes no difference where the line integral is so both line integrals can be moved until they correspond to the same contour γ. See also: Z-transform^{*}
A function with poles at z=1 and z=2 (for example f(z) = 1/((z − 1)(z − 2)), an illustrative choice) has 3 different Laurent series centered on the origin (z_{0} = 0):
 For 0 < |z| < 1 the Laurent series has only positive subscripts and is the Taylor series.
 For 1 < |z| < 2 the Laurent series has positive and negative subscripts.
 For 2 < |z| the Laurent series has only negative subscripts.
Cauchy formula for repeated integration^{*}:
f^{(−n)}(x) = (1/(n−1)!) ∫_{a}^{x} (x − t)^{n−1} f(t) dt
For every holomorphic function^{w} both f_{x} and f_{y} are harmonic functions^{w}.
Any twodimensional harmonic function is the real part of a complex analytic function^{w}.
See complex analysis^{w}.^{[40]}
 f_{y} is the harmonic conjugate^{*} of f_{x}.
 Geometrically f_{x} and f_{y} are related as having orthogonal trajectories, away from the zeroes of the underlying holomorphic function; the contours on which f_{x} and f_{y} are constant (equipotentials^{*} and streamlines^{*}) cross at right angles.
 In this regard, f_{x}+if_{y} would be the complex potential, where f_{x} is the potential function^{*} and f_{y} is the stream function^{*}.^{[41]}
 f_{x} and f_{y} are both solutions of Laplace's equation^{w} so divergence of the gradient is zero
 Legendre functions^{*} are solutions to Legendre's differential equation.
 This ordinary differential equation is frequently encountered when solving Laplace's equation (and related partial differential equations) in spherical coordinates.
 A harmonic function^{w} is a scalar potential function therefore the curl of the gradient will also be zero.
 See Potential theory
 Harmonic functions are real analogues to holomorphic functions.
 All harmonic functions are analytic, i.e. they can be locally expressed as power series.
 This is a general fact about elliptic operators^{*}, of which the Laplacian is a major example.
 The value of a harmonic function at any point inside a disk is a weighted average^{*} of the value of the function on the boundary of the disk.
 The Poisson kernel^{*} gives different weight to different points on the boundary except when x=0.
 The value at the center of the disk (x=0) equals the average of the equally weighted values on the boundary.
 All locally integrable functions satisfying the mean-value property are both infinitely differentiable and harmonic.
 The kernel itself appears to simply be 1/r^n shifted to the point x and multiplied by different constants.
 For a circle (K = Poisson Kernel):

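The mean value property itself is easy to check numerically; here for the harmonic function u(x, y) = x^{2} − y^{2} (an illustrative choice), the average over any circle equals the value at its center:

```python
import math

# Mean value property of harmonic functions: for u(x, y) = x^2 - y^2
# (harmonic, since u_xx + u_yy = 2 - 2 = 0), the average of u over any
# circle equals u at the circle's center.

def u(x, y):
    return x * x - y * y

def circle_average(cx, cy, r, steps=10000):
    total = 0.0
    for k in range(steps):
        theta = 2 * math.pi * k / steps
        total += u(cx + r * math.cos(theta), cy + r * math.sin(theta))
    return total / steps

for (cx, cy, r) in [(0.0, 0.0, 1.0), (2.0, -1.0, 0.5), (3.0, 4.0, 2.0)]:
    assert abs(circle_average(cx, cy, r) - u(cx, cy)) < 1e-9
```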
Geometric calculus
 See also: Geometric_algebra#Geometric_calculus^{*}
From Wikipedia:Geometric calculus^{w}:
Geometric calculus extends the geometric algebra to include differentiation and integration. The formalism is powerful and can be shown to encompass other mathematical theories including differential geometry and differential forms.
With a geometric algebra given, let a and b be vectors^{*} and let F(a) be a multivector^{w}-valued function. The directional derivative^{w} of F(a) along b is defined as
(∇_{b}F)(a) = lim_{ε→0} (F(a + εb) − F(a)) / ε
provided that the limit exists, where the limit is taken for scalar ε. This is similar to the usual definition of a directional derivative but extends it to functions that are not necessarily scalar-valued.
Next, choose a set of basis vector^{w}s e_{i} and consider the operators, noted ∂_{i}, that perform directional derivatives in the directions of e_{i}.
Then, using the Einstein summation notation^{*}, consider the operator e^{i}∂_{i}:
which means:
or, more verbosely:
It can be shown that this operator is independent of the choice of frame, and can thus be used to define the geometric derivative:
This is similar to the usual definition of the gradient^{w}, but it, too, extends to functions that are not necessarily scalarvalued.
It can be shown that the directional derivative is linear regarding its direction, that is:
From this follows that the directional derivative is the inner product of its direction and the geometric derivative. All that needs to be observed is that the direction a can be written a = (a·e^{i})e_{i}, so that:
For this reason, is often noted .
The standard order of operations^{w} for the geometric derivative is that it acts only on the function closest to its immediate right. Given two functions F and G, then for example we have
Although the partial derivative exhibits a product rule^{w}, the geometric derivative only partially inherits this property. Consider two functions F and G:
 ∇(FG) = e^i ∂_i(FG) = e^i ((∂_i F)G + F(∂_i G)) = (∇F)G + e^i F(∂_i G)
Since the geometric product is not commutative^{w} (e^i F ≠ F e^i in general), we cannot proceed further without new notation. A solution is to adopt the overdot^{*} notation, in which the scope of a geometric derivative with an overdot is the multivector-valued function sharing the same overdot. In this case, if we define
 ∇̇F Ġ = e^i F (∂_i G)
then the product rule for the geometric derivative is
 ∇(FG) = (∇F)G + ∇̇F Ġ
Let F be an r-grade multivector. Then we can define an additional pair of operators, the interior and exterior derivatives,
 ∇·F = ⟨∇F⟩_{r−1} and ∇∧F = ⟨∇F⟩_{r+1}
In particular, if F is grade 1 (a vector-valued function), then we can write
 ∇F = ∇·F + ∇∧F
and identify the divergence^{w} and curl^{w} as
 ∇·F = div F and ∇∧F = I curl F
Note, however, that these two operators are considerably weaker than the geometric derivative counterpart for several reasons. Neither the interior derivative operator nor the exterior derivative operator is invertible^{*}.
The reason for defining the geometric derivative and integral as above is that they allow a strong generalization of Stokes' theorem^{w}. Let L(A; x) be a multivector-valued function of r-grade input A and general position x, linear in its first argument. Then the fundamental theorem of geometric calculus relates the integral of a derivative over the volume V to the integral over its boundary:
 ∫_V L̇(∇̇ dX; x) = ∮_{∂V} L(dS; x)
As an example, let L(A; x) be built from a vector-valued function F(x) and an (n−1)-grade multivector A. We find that
and likewise
Thus we recover the divergence theorem^{w},
 ∫_V ∇·F dV = ∮_{∂V} n̂·F dA
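The classical vector-calculus form is easy to verify numerically. The following sketch is an illustration I added, with F(x, y) = (x, y) on the unit disk as the assumed example; it compares the volume integral of div F with the boundary flux:

```python
import math

# Divergence theorem check in 2D for F(x, y) = (x, y) on the unit disk.
# div F = 2, so the volume integral is 2 * area = 2 * pi.
# On the boundary n = (cos t, sin t) and F . n = 1, so the flux is 2 * pi.

N = 1000
# Volume integral of div F over the disk, in polar coordinates.
vol = 0.0
for i in range(N):
    r = (i + 0.5) / N                         # midpoint radii
    vol += 2 * r * (1.0 / N) * 2 * math.pi    # div F = 2, area element r dr dt

# Boundary flux of F through the unit circle.
flux = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    fx, fy = math.cos(t), math.sin(t)         # F at the boundary point
    nx, ny = math.cos(t), math.sin(t)         # outward unit normal
    flux += (fx * nx + fy * ny) * (2 * math.pi / N)

print(round(vol, 6), round(flux, 6), round(2 * math.pi, 6))  # all equal
```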
Calculus of variations
 Calculus of variations^{*}, Functional^{*}, Functional analysis^{*}, Higherorder function^{*}
Whereas calculus is concerned with infinitesimal changes of variables, calculus of variations is concerned with infinitesimal changes of the underlying function itself.
Calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals.
A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is obviously a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, where the optical length depends upon the material of the medium. One corresponding concept in mechanics is the principle of least action.^{[42]}
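The "shortest curve" problem can be probed numerically. In this sketch (my illustration; the sine-bump perturbation is an assumed family of trial curves), the curves y(x) = x + a·sin(πx) all join (0, 0) to (1, 1), and the arc length is smallest at a = 0, the straight line:

```python
import math

# Arc length of y(x) = x + a * sin(pi x), approximated by small chords.
def length(a, n=2000):
    total = 0.0
    for i in range(n):
        x0, x1 = i / n, (i + 1) / n
        y0 = x0 + a * math.sin(math.pi * x0)
        y1 = x1 + a * math.sin(math.pi * x1)
        total += math.hypot(x1 - x0, y1 - y0)
    return total

lengths = {a: length(a) for a in (-0.2, -0.1, 0.0, 0.1, 0.2)}
best = min(lengths, key=lengths.get)
print(best, round(lengths[0.0], 6))  # a = 0 wins, length ~sqrt(2)
```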
Discrete mathematics
Groups and rings
 Main articles: Algebraic structure^{w}, Abstract algebra^{w}, and group theory^{*}
Addition and multiplication can be generalized in so many ways that mathematicians have created a whole system of categories just to organize them.
A magma is a set with a single closed binary operation (usually, but not always^{*}, addition).
 a + b = c
A semigroup is a magma where the addition is associative. See also Semigroupoid^{*}
 a + (b + c) = (a + b) + c
A monoid is a semigroup with an additive identity element.
 a + 0 = a
A group is a monoid with additive inverse elements.
 a + (−a) = 0
An abelian group is a group where the addition is commutative.
 a + b = b + a
A pseudoring^{*} is an abelian group that also has a second closed, associative, binary operation (usually, but not always, multiplication).
 a * (b * c) = (a * b) * c
 And these two operations satisfy a distributive law.
 a(b + c) = ab + ac
A ring is a pseudoring that has a multiplicative identity.
 a * 1 = a
A commutative ring^{*} is a ring where multiplication commutes, (e.g. integers)
 a * b = b * a
A field is a commutative ring where every nonzero element has a multiplicative inverse,
 a * (1/a) = 1
 The existence of a multiplicative inverse for every nonzero element automatically implies that there are no zero divisors^{*} in a field
 if ab = 0 for some a ≠ 0, then we must have b = 0 (we call this having no zero divisors).
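These axioms are easy to check by brute force for a small example. The sketch below is an added illustration (the choice n = 7 is arbitrary); it verifies that addition modulo n forms an abelian group:

```python
# Checking the abelian-group axioms for addition modulo n.
n = 7
elems = range(n)
add = lambda a, b: (a + b) % n

closure = all(add(a, b) in elems for a in elems for b in elems)
assoc = all(add(a, add(b, c)) == add(add(a, b), c)
            for a in elems for b in elems for c in elems)
identity = all(add(a, 0) == a for a in elems)
inverses = all(any(add(a, b) == 0 for b in elems) for a in elems)
commut = all(add(a, b) == add(b, a) for a in elems for b in elems)
print(closure, assoc, identity, inverses, commut)  # all True
```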
The characteristic^{*} of a ring R, denoted char(R), is the smallest number of times one must add the multiplicative identity to itself to get the additive identity; if no such number exists, char(R) = 0.
The center of a group^{*} G consists of all those elements x in G such that xg = gx for all g in G. This is a normal subgroup^{*} of G.^{[43]} See also: Centralizer and normalizer^{*}.
All nonzero nilpotent^{*} elements are zero divisors^{*}.
 For example, the square matrix^{w} ((0, 1), (0, 0)) is nilpotent: its square is the zero matrix.
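A minimal sketch of this fact (an added illustration): the standard 2×2 nilpotent matrix multiplied by itself gives the zero matrix, so a nonzero element can be a zero divisor:

```python
# N is nonzero, yet N * N = 0, so N is a zero divisor.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1], [0, 0]]
print(matmul(N, N))  # [[0, 0], [0, 0]]
```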
Set theory
"See also: Naive set theory^{*}, Zermelo–Fraenkel set theory^{*}, Set theory^{w}, Set notation^{*}, Setbuilder notation^{*}, Set^{w}, Algebra of sets^{*}, Field of sets^{*}, and Sigmaalgebra^{*}
∅ is the empty set (the additive identity)
U is the universe of all elements (the multiplicative identity)
a ∈ A means that a is an element^{w} (or member) of set A. In other words a is in A.
 {x ∈ A : x ∉ ℝ} means the set of all x's that are members of the set A such that x is not a member of the reals^{w}. Could also be written {x ∈ A | x ∉ ℝ}.
A set^{w} does not allow multiple instances of an element.
 A multiset^{w} does allow multiple instances of an element.
A set can contain other sets.
A ⊂ B means that A is a proper subset^{w} of B
 A ⊆ A means that a set is a subset^{w} of itself. But a set is not a proper subset^{w} of itself.
A ∪ B is the Union^{w} of the sets A and B. In other words, the set of all elements that are in A or in B (or both).
A ∩ B is the Intersection^{w} of the sets A and B. In other words, the set of all elements that are in both A and B.
 Associative: A ∪ (B ∪ C) = (A ∪ B) ∪ C and A ∩ (B ∩ C) = (A ∩ B) ∩ C
 Distributive: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) and A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
 Commutative: A ∪ B = B ∪ A and A ∩ B = B ∩ A
A ∖ B is the Set difference^{w} of A and B. In other words, the set of all elements of A that are not in B.
 A^c or U ∖ A is the complement^{w} of A.
A △ B or A ⊖ B is the Anti-intersection^{w} (symmetric difference) of sets A and B, which is the set of all objects that are members of either A or B but not of both.
A × B is the Cartesian product^{w} of A and B, which is the set whose members are all possible ordered pairs^{w} (a, b) where a is a member of A and b is a member of B.
The Power set^{w} of a set A is the set whose members are all of the possible subsets of A.
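Python's built-in sets implement most of the operations above directly. This sketch (an added illustration with small example sets) shows union, intersection, difference, symmetric difference, Cartesian product, and power set:

```python
from itertools import combinations

A = {1, 2, 3}
B = {3, 4}

print(A | B)   # union: {1, 2, 3, 4}
print(A & B)   # intersection: {3}
print(A - B)   # set difference: {1, 2}
print(A ^ B)   # symmetric difference ("anti-intersection"): {1, 2, 4}

# Cartesian product: all ordered pairs (a, b).
product = {(a, b) for a in A for b in B}
# Power set: all subsets of A, of every size.
power_set = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]
print(len(product), len(power_set))  # 6 pairs, 2**3 = 8 subsets
```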
A cover^{*} of a set X is a collection of sets whose union contains X as a subset.^{[44]}
A subset A of a topological space X is called dense^{*} (in X) if every point x in X either belongs to A or is arbitrarily "close" to a member of A.
 A subset A of X is meagre^{*} if it can be expressed as the union of countably many nowhere dense subsets of X.
The Disjoint union^{*} of sets A₀ = {1, 2, 3} and A₁ = {1, 2, 3} can be computed by finding:
 A₀* = {(1, 0), (2, 0), (3, 0)} and A₁* = {(1, 1), (2, 1), (3, 1)}
so A₀ ⊔ A₁ = A₀* ∪ A₁* = {(1, 0), (2, 0), (3, 0), (1, 1), (2, 1), (3, 1)}
Let H be the subgroup of the integers (mZ, +) = ({..., −2m, −m, 0, m, 2m, ...}, +) where m is a positive integer.
 Then the cosets^{*} of H are the mZ + a = {..., −2m+a, −m+a, a, m+a, 2m+a, ...}.
 There are no more than m cosets, because mZ + m = m(Z + 1) = mZ.
 The coset (mZ + a, +) is the congruence class^{w} of a modulo m.^{[45]}
 Cosets are not usually themselves subgroups of G, only subsets.
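A small sketch (an added illustration, with m = 3 assumed) listing the cosets of mZ over a window of integers; each integer lands in exactly one coset, its congruence class modulo m:

```python
# The cosets of 3Z in Z, sampled over a small window of integers.
m = 3
window = range(-9, 10)
cosets = {a: [x for x in window if (x - a) % m == 0] for a in range(m)}
for a, members in cosets.items():
    print(a, members)

# Every integer in the window is in exactly one coset.
assert sorted(x for ms in cosets.values() for x in ms) == list(window)
```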
means "there exists at least one"
means "there exists one and only one"
means "for all"
means "and" (not to be confused with wedge product^{w})
means "or" (not to be confused with antiwedge product^{w})
Probability
|A| is the cardinality^{w} of A, which is the number of elements in A. See measure^{w}.
P(A) is the unconditional probability^{w} that A will happen.
P(A|B) is the conditional probability^{w} that A will happen given that B has happened.
P(A∪B) = P(A) + P(B) − P(A∩B) means that the probability that A or B will happen is the probability of A plus the probability of B minus the probability that both A and B will happen.
P(A∩B) = P(A|B)·P(B) means that the probability that A and B will happen is the probability of "A given B" times the probability of B.
P(A|B) = P(B|A)·P(A)/P(B) is Bayes' theorem^{*}
From Wikipedia:Base rate fallacy:
In a city of 1 million inhabitants let there be 100 terrorists and 999,900 non-terrorists. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software. 99% of the time it behaves correctly. 1% of the time it behaves incorrectly, ringing when it should not and failing to ring when it should. Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T | B), the probability that a terrorist has been detected given the ringing of the bell? Someone making the 'base rate fallacy' would infer that there is a 99% chance that the detected person is a terrorist. But that is not even close. For every 1 million faces scanned it will see 100 terrorists and will correctly ring 99 times. But it will also ring falsely 9,999 times. So the true probability is only 99/(9,999+99) or about 1%.
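Working the same numbers through Bayes' theorem (a sketch I added; the rates are those given in the quoted example):

```python
# P(T | ring) = P(ring | T) P(T) / P(ring), computed by counting.
terrorists, innocents = 100, 999_900
p_ring_given_t = 0.99          # true positive rate
p_ring_given_not_t = 0.01      # false positive rate

true_alarms = terrorists * p_ring_given_t        # 99 correct rings
false_alarms = innocents * p_ring_given_not_t    # 9999 false rings
p_t_given_ring = true_alarms / (true_alarms + false_alarms)
print(round(p_t_given_ring, 4))  # ~0.0098, about 1%
```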
permutation^{w} relates to the act of arranging all the members of a set^{w} into some sequence^{w} or order^{*}.
The number of permutations of n distinct objects is n!^{w}.^{[46]}
 A derangement is a permutation of the elements of a set, such that no element appears in its original position.
In other words, derangement is a permutation that has no fixed points^{*}.
The number of derangements^{*} of a set of size n, usually written !n^{*}, is called the "derangement number" or "de Montmort number".^{[47]}
 The rencontres numbers^{*} are a triangular array of integers that enumerate permutations of the set { 1, ..., n } with specified numbers of fixed points: in other words, partial derangements.^{[48]}
a combination^{w} is a selection of items from a collection, such that the order of selection does not matter.
For example, given three numbers, say 1, 2, and 3, there are three ways to choose two from this set of three: 12, 13, and 23.
More formally, a kcombination of a set^{w} S is a subset of k distinct elements of S.
If the set has n elements, the number of k-combinations is equal to the binomial coefficient^{w}
 (n choose k) = n!/(k!(n−k)!)
 Pronounced "n choose k". The set of all k-combinations of a set S is often denoted by (S choose k).
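The counts above can be checked with Python's itertools (an added sketch; the 4-element set is an arbitrary example):

```python
from itertools import permutations, combinations
from math import comb, factorial

items = [1, 2, 3, 4]
perms = list(permutations(items))
print(len(perms), factorial(4))  # n! = 24 permutations

# A derangement leaves no element in its original position.
derangements = [p for p in perms
                if all(p[i] != items[i] for i in range(len(items)))]
print(len(derangements))  # !4 = 9

# "n choose k": combinations ignore order.
print(list(combinations([1, 2, 3], 2)), comb(3, 2))  # 3 ways
```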
The central limit theorem (CLT) establishes that, in most situations, when independent random variables^{*} are added, their properly normalized sum tends toward a normal distribution^{w} (informally a "bell curve") even if the original variables themselves are not normally distributed.^{[49]}
In statistics^{w}, the standard deviation (SD, also represented by the Greek letter sigma σ^{w} or the Latin letter s) is a measure that is used to quantify the amount of variation or dispersion^{*} of a set of data values.^{[50]}
A low standard deviation indicates that the data points tend to be close to the mean^{w} (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values.^{[51]}
The hypergeometric distribution^{*} is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a success or a failure.
 In contrast, the binomial distribution^{*} describes the probability of k successes in n draws with replacement.^{[52]}
See also Dirichlet distribution^{*} and Rice distribution^{*}
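A quick simulation sketch of the central limit theorem and the standard deviation (an added illustration; the choice of 12 uniform summands and 20,000 trials is arbitrary):

```python
import random
import statistics

# Sums of uniform random variables (far from bell-shaped individually)
# have mean n/2 and standard deviation sqrt(n/12); for n = 12 that is
# mean 6 and sd 1, and their distribution is close to normal.
random.seed(0)
n, trials = 12, 20_000
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

mean = statistics.fmean(sums)
sd = statistics.stdev(sums)
print(round(mean, 2), round(sd, 2))  # ~6.0 and ~1.0
```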
Morphisms
 See also: Higher category theory^{w} and Multivalued function (misnomer)^{*}
Every function^{w} has exactly one output for every input.
If the function f(x) is invertible^{*} then its inverse function^{w} f^{−1}(x) has exactly one output for every input.
If it isn't invertible then it doesn't have an inverse function.
 f(x) = x/(x−1) is an involution^{*}, which is a function that is its own inverse function: f(f(x)) = x
Injection^{w} (invertible function) | Bijection^{w} (injection + surjection) | Surjection^{w}
A morphism^{w} is exactly the same as a function but in Category theory^{w} every morphism has an inverse which is allowed to have more than one value or no value at all.
Categories^{*} consist of:
 Objects (usually Sets^{w})
 Morphisms, each with:
  one source object (domain)
  one target object (codomain)
a morphism is represented by an arrow:
 f : X → Y is written f(x) = y where x is in X and y is in Y.
 g : Y → Z is written g(y) = z where y is in Y and z is in Z.
The image^{*} of y is z.
The preimage^{*} (or fiber^{*}) of z is the set of all y whose image is z and is denoted g⁻¹(z)
A picture is worth 1000 words 
A space Y is a covering space^{*} (a fiber bundle) of space Z if the map is locally homeomorphic^{w}.
 A covering space is a universal covering space^{*} if it is simply connected^{*}.
 The concept of a universal cover was first developed to define a natural domain for the analytic continuation^{*} of an analytic function^{w}.
 The general theory of analytic continuation and its generalizations are known as sheaf theory^{*}.
 The set of germs^{*} can be considered to be the analytic continuation of an analytic function.
A topological space is (path)connected^{*} if no part of it is disconnected.
A space is simply connected^{*} if there are no holes passing all the way through it (therefore any loop can be shrunk to a point)
 See Homology^{*}
Composition of morphisms:
 g ∘ f : X → Z is written g(f(x)) = z
 f is the pullback^{*} of g
 f is the lift^{*} of
 ? is the pushforward^{*} of ?
A homomorphism^{*} is a map from one set to another of the same type which preserves the operations of the algebraic structure:
 See Cauchy's functional equation^{*}
 A Functor^{*} is a homomorphism with a domain in one category and a codomain in another.
 A group homomorphism^{*} from (G, ∗) to (H, ·) is a function^{*} h : G → H such that
  h(u ∗ v) = h(u) · h(v) for all u and v in G.
  For example, log(a · b) = log(a) + log(b).
 Since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism^{*} of groups.
 See also group action^{*} and group orbit^{*}
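A tiny numeric sketch (an added illustration) of the log example: log maps the group of positive reals under multiplication to the reals under addition, and exp inverts it:

```python
import math

# log(a * b) = log(a) + log(b): multiplication becomes addition.
a, b = 3.7, 12.5
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))

# exp undoes log, so log is an isomorphism, not just a homomorphism.
assert math.isclose(math.exp(math.log(a)), a)
print("log turns multiplication into addition")
```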
A Multicategory^{*} has morphisms with more than one source object.
A Multilinear map^{*} :
has a corresponding Linear map^{w}:
Numerical methods
 See also: Explicit and implicit methods^{*}
One of the simplest problems is the evaluation of a function at a given point.
The most straightforward approach, just plugging the number into the formula, is sometimes not very efficient.
For polynomials, a better approach is using the Horner scheme^{*}, since it reduces the necessary number of multiplications and additions.
Generally, it is important to estimate and control roundoff errors^{*} arising from the use of floating point^{*} arithmetic.
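The Horner scheme can be sketched in a few lines (an added illustration; the cubic is an arbitrary example):

```python
# Evaluate p(x) = 2x^3 - 6x^2 + 2x - 1 as ((2x - 6)x + 2)x - 1,
# which needs only n multiplications for a degree-n polynomial.
def horner(coeffs, x):
    """coeffs are listed highest degree first."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([2, -6, 2, -1], 3))  # 2*27 - 6*9 + 2*3 - 1 = 5
```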
Interpolation^{*} solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation^{*} is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.
Regression^{*} is also similar, but it takes into account that the data is imprecise.
Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function.
The least squares^{*}method is one popular way to achieve this.
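A minimal least-squares sketch (an added illustration; the noisy data points are made up for the example) fitting a line y = a + b·x via the closed-form normal equations:

```python
# Fit y = a + b*x to imprecise measurements (roughly y = 1 + 2x).
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.1, 8.9]

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
a = (sy - b * sx) / n                          # intercept
print(round(a, 3), round(b, 3))  # close to 1 and 2
```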
Much effort has been put in the development of methods for solving systems of linear equations^{*}.
 Standard direct methods, i.e., methods that use some matrix decomposition^{*}
 Gaussian elimination^{*}, LU decomposition^{*}, Cholesky decomposition^{*} for symmetric^{w} (or hermitian^{w}) and positive-definite matrices^{w}, and QR decomposition^{*} for non-square matrices.
 Jacobi method^{*}, Gauss–Seidel method^{*}, successive overrelaxation^{*} and conjugate gradient method^{*} are usually preferred for large systems. General iterative methods can be developed using a matrix splitting^{*}.
Rootfinding algorithms^{*} are used to solve nonlinear equations.
 If the function is differentiable^{w} and the derivative is known, then Newton's method^{w} is a popular choice.
 Linearization^{*} is another technique for solving nonlinear equations.
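A sketch of Newton's method (an added illustration) finding √2 as a root of f(x) = x² − 2 using f′(x) = 2x:

```python
# Newton's method: repeatedly replace x by x - f(x)/f'(x).
def newton(f, fprime, x, steps=10):
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # ~1.41421356...
```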
Optimization^{w} problems ask for the point at which a given function is maximized (or minimized).
Often, the point also has to satisfy some constraints^{*}.
Differential equation^{w}: If you set up 100 fans to blow air from one end of the room to the other and then you drop a feather into the wind, what happens?
The feather will follow the air currents, which may be very complex.
One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again.
This is called the Euler method^{*} for solving an ordinary differential equation.
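The feather-and-fans idea in code (an added sketch; the test equation dy/dt = y with exact solution e^t is an assumed example). Smaller steps track the true solution better:

```python
import math

# Euler method: advance in a straight line for one small step at a time.
def euler(f, y0, t_end, steps):
    y, t = y0, 0.0
    h = t_end / steps
    for _ in range(steps):
        y += h * f(t, y)  # one straight-line step at the current slope
        t += h
    return y

for steps in (10, 100, 1000):
    print(steps, euler(lambda t, y: y, 1.0, 1.0, steps))
print(math.e)  # exact value ~2.71828
```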
Information theory
From Wikipedia:Information theory:
Information theory studies the quantification, storage, and communication of information.
Communication over a channel—such as an ethernet cable—is the primary motivation of information theory.
From Wikipedia:Quantities of information:
Shannon derived a measure of information content called the self-information^{*} or "surprisal" of a message m:
 I(m) = log(1/p(m)) = −log(p(m))
where p(m) = Pr(M = m) is the probability that message m is chosen from all possible choices in the message space M. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of bits^{*}.
Information is transferred from a source to a recipient only if the recipient of the information did not already have the information to begin with. Messages that convey information that is certain to happen and already known by the recipient contain no real information. Infrequently occurring messages contain more information than more frequently occurring messages. This fact is reflected in the above equation  a certain message, i.e. of probability 1, has an information measure of zero. In addition, a compound message of two (or more) unrelated (or mutually independent) messages would have a quantity of information that is the sum of the measures of information of each message individually. That fact is also reflected in the above equation, supporting the validity of its derivation.
An example: The weather forecast broadcast is: "Tonight's forecast: Dark. Continued darkness until widely scattered light in the morning." This message contains almost no information. However, a forecast of a snowstorm would certainly contain information since such does not happen every evening. There would be an even greater amount of information in an accurate forecast of snow for a warm location, such as Miami. The amount of information in a forecast of snow for a location where it never snows (impossible event) is the highest (infinity).
The more surprising a message is, the more information it conveys. The message "LLLLLLLLLLLLLLLLLLLLLLLLL" conveys exactly as much information as the message "25 L's". The first message, which is 25 bytes long, can therefore be "compressed" into the second message, which is only 6 bytes long.
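The surprisal formula is a one-liner (an added sketch): certain messages carry 0 bits, rarer messages carry more, and independent messages add:

```python
import math

def surprisal_bits(p):
    # I(m) = -log2 p(m): improbable messages carry more information.
    return -math.log2(p)

print(surprisal_bits(1.0))    # certain event: 0 bits
print(surprisal_bits(0.5))    # fair coin flip: 1 bit
print(surprisal_bits(1/256))  # 1-in-256 event: 8 bits

# Independent messages add: I(m1 m2) = I(m1) + I(m2).
assert math.isclose(surprisal_bits(0.5 * 0.25),
                    surprisal_bits(0.5) + surprisal_bits(0.25))
```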
Early computers
 See also: Time complexity^{*}
 Analog computer^{*}
 Abacus^{*}
 Napier's bones^{*}
 Slide rule^{*}
 Curta^{*}
 Lehmer sieve^{*}
 Z2 (computer)^{*}
Tactical thinking
 | Tactic X (Cooperate) | Tactic Y (Defect)
Tactic A (Cooperate) | 1, 1 | −5, 5
Tactic B (Defect) | 5, −5 | −5, −5
 See also Wikipedia:Strategy (game theory)
 From Wikipedia:Game theory:
In the accompanying example there are two players; Player one (blue) chooses the row and player two (red) chooses the column.
Each player must choose without knowing what the other player has chosen.
The payoffs are provided in the interior.
The first number is the payoff received by Player 1; the second is the payoff for Player 2.
Tit for tat is a simple and highly effective tactic in game theory for the iterated prisoner's dilemma.
An agent using this tactic will first cooperate, then subsequently replicate an opponent's previous action.
If the opponent previously was cooperative, the agent is cooperative.
If not, the agent is not.^{[53]}
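A sketch of tit for tat in code (an added illustration; the always-defect opponent is an assumed example):

```python
# Iterated prisoner's dilemma moves: C = cooperate, D = defect.
def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

opponent_moves, my_moves = [], []
for round_ in range(5):
    my_moves.append(tit_for_tat(opponent_moves))
    opponent_moves.append(always_defect(my_moves))
print(my_moves)  # ['C', 'D', 'D', 'D', 'D']
```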
 | X | Y
A | 1, −1 | −1, 1
B | −1, 1 | 1, −1
In zero-sum games the sum of the payoffs is always zero (meaning that a player can only benefit at the expense of others).
Cooperation is impossible in a zero-sum game.
John Forbes Nash proved that there is a Nash equilibrium (an optimum tactic) for every finite game.
In the zero-sum game shown above the optimum tactic for player 1 is to randomly choose A or B with equal probability.
Strategic thinking differs from tactical thinking by taking into account how the short term goals and therefore optimum tactics change over time.
For example the opening, middlegame, and endgame of chess require radically different tactics.
Physics
 See also Galilean relativity^{*}
Something is known Beyond a reasonable doubt if any doubt that it is true is unreasonable. A doubt is reasonable if it is consistent with the laws of cause and effect.
In the four rules, as they came finally to stand in the 1726 edition, Newton effectively offers a methodology for handling unknown phenomena in nature and reaching towards explanations for them.
 Rule 1: We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
 Rule 2: Therefore to the same natural effects we must, as far as possible, assign the same causes.
 Rule 3: The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
 Rule 4: In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, not withstanding any contrary hypothesis that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.
 Newtonian mechanics^{w}, Lagrangian mechanics^{w}, and Hamiltonian mechanics^{w}
 The difference between the net kinetic energy and the net potential energy is called the “Lagrangian.”
 The action is defined as the time integral of the Lagrangian.
 The Hamiltonian is the sum of the kinetic and potential energies.
 Noether's theorem^{*} states that every differentiable symmetry of the action^{*} of a physical system has a corresponding conservation law^{*}.
 Special relativity^{*}, and General relativity^{*}
 Energy is conserved in relativity and proper velocity is proportional to momentum at all velocities.
Highly recommend:
 Thinking Physics Is Gedanken Physics by Lewis Carroll Epstein
Dimensional analysis
 See Natural units^{*}
Any physical law that accurately describes the real world must be independent of the units (e.g. km or mm) used to measure the physical variables.
Consequently, every possible commensurate equation for the physics of the system can be written in the form
 f(π₁, π₂, ..., π_k) = 0
where the π_i are dimensionless combinations of the physical variables.
The dimension, D_{n}, of a physical quantity can be expressed as a product of the basic physical dimensions length (L), mass (M), time (T), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J), each raised to a rational power.
Suppose we wish to calculate the range of a cannonball^{*} when fired with a vertical velocity component V_y and a horizontal velocity component V_x, assuming it is fired on a flat surface.
The quantities of interest and their dimensions are then
 range R as L_x
 V_x as L_x/T
 V_y as L_y/T
 g as L_y/T^2
The equation for the range may be written:
 R = C V_x^a V_y^b g^c
Therefore
 L_x = (L_x/T)^a (L_y/T)^b (L_y/T²)^c
and we may solve completely as a = 1, b = 1, and c = −1, so that R = C V_x V_y / g.
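Dimensional analysis fixes the form R = C·V_x·V_y/g but not the constant C; the full projectile solution gives C = 2. A small sketch (an added illustration with assumed numbers) checks the formula against basic kinematics:

```python
# Flight time from vertical motion, then range from horizontal motion.
g = 9.81                 # m/s^2 (assumed value)
vx, vy = 30.0, 20.0      # assumed velocity components, m/s

t_flight = 2 * vy / g                # time for the ball to land again
range_kinematic = vx * t_flight
range_dimensional = 2 * vx * vy / g  # C * vx * vy / g with C = 2
print(round(range_kinematic, 3), round(range_dimensional, 3))  # equal
```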
Atoms
See also: Periodic table^{w}
The first pair of electrons fall into the ground shell. Once that shell is filled no more electrons can go into it. Any additional electrons go into higher shells.
The nucleus however works differently. The first few neutrons form the first shell. But any additional neutrons continue to fall into that same shell which continues to expand until there are 49 pairs of neutrons in that shell.
See also
Search Math wiki
External links
References
 ↑ Wikipedia:Generalization
 ↑ Wikipedia:Cartesian product
 ↑ Wikipedia:Tangent bundle
 ↑ Wikipedia:Lie group
 ↑ Wikipedia:Sesquilinear form
 ↑ Wikipedia:Tensor
 ↑ Wikipedia:Tensor (intrinsic definition)
 ↑ Wikipedia:Special unitary group
 ↑ Lawson, H. Blaine; Michelsohn, MarieLouise (1989). Spin Geometry. Princeton University Press^{w}. ISBN 9780691085425 page 14
 ↑ Friedrich, Thomas (2000), Dirac Operators in Riemannian Geometry, American Mathematical Society^{w}, ISBN 9780821820551 page 15
 ↑ Cite error: invalid <ref> tag; no text was provided for refs named Flanders
 ↑ W. K. Clifford, "Preliminary sketch of biquaternions," Proc. London Math. Soc. Vol. 4 (1873) pp. 381395
 ↑ W. K. Clifford, Mathematical Papers, (ed. R. Tucker), London: Macmillan, 1882.
 ↑ Wikipedia:Rotor (mathematics)
 ↑ Roger Penrose (2005). The road to reality: a complete guide to the laws of our universe. Knopf. pp. 203–206.
 ↑ E. Meinrenken (2013), "The spin representation", Clifford Algebras and Lie Theory, Ergebnisse der Mathematik undihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics, 58, SpringerVerlag, doi:10.1007/9783642362163_3
 ↑ S.H. Dong (2011), "Chapter 2, Special Orthogonal Group SO(N)", Wave Equations in Higher Dimensions, Springer, pp. 13–38
 ↑ "Pauli matrices". Planetmath website. 28 March 2008. http://planetmath.org/PauliMatrices. Retrieved 28 May 2013.
 ↑ Oersted Medal Lecture David Hestenes "Reforming the Mathematical Language of Physics" (Am. J. Phys. 71 (2), February 2003, pp. 104–121) Online:http://geocalc.clas.asu.edu/html/OerstedReformingTheLanguage.html p26
 ↑ The words map or mapping, transformation, correspondence, and operator are often used synonymously.
 ↑ Andrew Marx, Shortcut Algebra I: A Quick and Easy Way to Increase Your Algebra I Knowledge and Test Scores, Kaplan Publishing, 2007, ISBN 9781419552885, 288 pages, page 51
 ↑ Wikipedia:Multiplicity (mathematics)
 ↑ Wikipedia:Partial fraction decomposition
 ↑ Wikipedia:Basic hypergeometric series
 ↑ Wikipedia:qanalog
 ↑ ^{26.0} ^{26.1}
e^{x} = y = dy/dx
dx = dy/y = 1/y * dy
∫ (1/y)dy = ∫ dx = x = ln(y)
 ↑ Wikipedia:Product rule
 ↑ Wikipedia:Monotonic function
 ↑ Wikipedia:Generalized Fourier series
 ↑ Wikipedia:Spherical harmonics
 ↑ Wikipedia:Inverse Laplace transform
 ↑ http://mathworld.wolfram.com/PoissonKernel.html
 ↑ Wikipedia:Convolution theorem
 ↑ Wikipedia:RLC circuit
 ↑ Cite error: invalid <ref> tag; no text was provided for refs named eFunda
 ↑ Cite error: invalid <ref> tag; no text was provided for refs named edwards
 ↑ Cite error: invalid <ref> tag; no text was provided for refs named cohen
 ↑ Wikipedia:Total derivative
 ↑ Wikipedia:Residue (complex analysis)
 ↑ Wikipedia:Potential theory
 ↑ Wikipedia:Harmonic conjugate
 ↑ Wikipedia:Calculus of variations
 ↑ Wikipedia:Center (algebra)
 ↑ Wikipedia:Cover (topology)
 ↑ Joshi p. 323
 ↑ Wikipedia:Permutation
 ↑ Wikipedia:derangement
 ↑ Wikipedia:rencontres numbers
 ↑ Wikipedia:Central limit theorem
 ↑ Bland, J.M.; Altman, D.G. (1996). "Statistics notes: measurement error". BMJ 312 (7047): 1654. doi:10.1136/bmj.312.7047.1654. PMC 2351401. PMID 8664723. //www.ncbi.nlm.nih.gov/pmc/articles/PMC2351401/.
 ↑ Wikipedia:standard deviation
 ↑ Wikipedia:Hypergeometric distribution
 ↑ Wikipedia:Tit for tat