 This article is a continuation of Introductory mathematics.
It has been known since the time of Euclid^{w} that all of geometry can be derived from a handful of objects (points, lines...), a few actions on those objects, and a small number of axioms^{w}. Every field of science likewise can be reduced to a small set of objects, actions, and rules. Math itself is not a single field but rather a constellation of related fields. One way in which new fields are created is by the process of generalization.
A generalization is the formulation of general concepts from specific instances by abstracting common properties. Generalization is the process of identifying the parts of a whole, as belonging to the whole.^{[1]}
The purpose of this article is threefold:
 To give a broad general overview of the various fields and subfields of mathematics.
 To show how each field can be derived from first principles.
 To provide links to articles and webpages with more in depth information.
Foreword:
Mathematical notation^{w} can be extremely intimidating. Wikipedia is full of articles with page after page of indecipherable text. At first glance this article might appear to be the same. I want to assure the reader that every effort has been made to simplify everything as much as possible.
The following has been assembled from countless small pieces gathered from throughout the world wide web. I can't guarantee that there are no errors in it. Please report any errors or omissions on this article's talk page.
Numbers
Scalars
 See also: Peano axioms^{w}, ^{*}Hyperoperation, ^{*}Algebraic extension
The basis of all of mathematics is the ^{*}"Next" function. See Graph theory^{w}.
 Next(0)=1
 Next(1)=2
 Next(2)=3
 Next(3)=4
 Next(4)=5
We might express this by saying that One differs from nothing as two differs from one. This defines the Natural numbers^{w} (denoted $ \mathbb{N}_0 $). Natural numbers are those used for counting.
 These have the convenient property of being transitive^{w}. That means that if a<b and b<c then it follows that a<c. In fact they are totally ordered^{w}. See ^{*}Order theory.
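The Next function, and addition built by repeatedly calling it, can be sketched in a few lines of Python (a toy model of my own, not part of the article; the names next_ and add are assumptions):

```python
def next_(n):
    """The successor ("Next") function: the sole primitive of the naturals."""
    return n + 1

def add(a, b):
    """Addition defined as calling next_ on a, b times."""
    for _ in range(b):
        a = next_(a)
    return a

assert next_(0) == 1
assert add(2, 3) == 5   # addition is just repeated succession
```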
Integers
Addition^{w} (See Tutorial:arithmetic) is defined as repeatedly calling the Next function, and its inverse is subtraction^{w}. But this leads to the ability to write equations like $ 1-3=x $ for which there is no answer among natural numbers. To provide an answer mathematicians generalize to the set of all integers^{w} (denoted $ \mathbb{Z} $ because Zahlen means numbers in German) which includes negative integers.
 The Additive identity^{w} is zero because x + 0 = x.
 The absolute value or modulus of x is defined as $ |x| = \left\{ \begin{array}{rl} x, & \text{if } x \geq 0 \\ -x, & \text{if } x < 0. \end{array}\right. $
 ^{*}Integers form a ring (denoted $ \mathcal O_\mathbb{Q} $): they are the ring of integers of the field of rational numbers. Ring^{w} is defined below.
 Z_{n} or $ \mathbb{Z}/n\mathbb{Z} $ is used to denote the set of ^{*}integers modulo n .
 ^{*}Modular arithmetic is essentially arithmetic in the quotient ring^{w} Z/nZ (which has n elements).
 Consider the ring of integers Z and the ideal of even numbers, denoted by 2Z. Then the quotient ring Z / 2Z has only two elements, zero for the even numbers and one for the odd numbers; applying the definition, [z] = z + 2Z := {z + 2y: 2y ∈ 2Z}, where 2Z is the ideal of even numbers. It is naturally isomorphic to the finite field with two elements, F_{2}. Intuitively: if you think of all the even numbers as 0, then every integer is either 0 (if it is even) or 1 (if it is odd and therefore differs from an even number by 1).
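The collapse of Z onto Z/2Z can be checked directly in Python, where the % operator picks each integer's coset representative (a small sketch of my own, not a formal construction):

```python
# Arithmetic in Z/2Z: z % 2 picks the representative of the coset [z] = z + 2Z,
# and both + and * respect the collapse.
n = 2
assert {z % n for z in range(-10, 10)} == {0, 1}        # exactly two cosets
assert (4 + 7) % n == ((4 % n) + (7 % n)) % n           # addition descends to the quotient
assert (4 * 7) % n == ((4 % n) * (7 % n)) % n           # so does multiplication
```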
 An ^{*}ideal is a special subset of a ring. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3.
 The study of integers is called Number theory^{w}.
 $ a \mid b $ means a divides b.
 $ a \nmid b $ means a does not divide b.
 $ p^a \mid\mid n $ means p^{a} exactly divides n (i.e. p^{a} divides n but p^{a+1} does not).
 A prime number is a natural number greater than 1 whose only positive divisors are 1 and itself.
 If a, b, c, and d are distinct primes and x=abc and y=c^{2}d then the greatest common divisor of x and y is c.
 Two integers a and b are said to be relatively prime, mutually prime, or coprime if the only positive integer that divides both of them is 1. Any prime number that divides one does not divide the other. This is equivalent to their greatest common divisor (gcd) being 1.
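Coprimality is easy to test with Python's built-in gcd (a quick illustration, assuming only the standard library):

```python
from math import gcd

# Coprime: the only positive integer dividing both is 1.
assert gcd(8, 15) == 1     # 8 = 2^3 and 15 = 3*5 share no prime factor
assert gcd(12, 18) == 6    # not coprime: both are divisible by 2 and 3
```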
Rational numbers
Multiplication^{w} (See Tutorial:multiplication) is defined as repeated addition, and its inverse is division^{w}. But this leads to equations like $ 3/2=x $ for which there is no answer among the integers. The solution is to generalize to the set of rational numbers^{w} (denoted $ \mathbb{Q} $) which includes fractions (See Tutorial:fractions). Any number which isn't rational is irrational^{w}. See also ^{*}p-adic number
 The set of all rational numbers except zero forms a ^{*}multiplicative group which is a set of invertible elements.
 Rational numbers form a ^{*}division algebra because every nonzero element has an inverse. The ability to find the inverse of every element turns out to be quite useful. A great deal of time and effort has been spent trying to find division algebras.
 The Multiplicative identity^{w} is one because x * 1 = x.
 Division by zero is undefined and undefinable^{w}. 1/0 exists nowhere on the complex plane^{w}. It does, however, exist on the Riemann sphere^{w} (often called the extended complex plane) where it is surprisingly well behaved. See also ^{*}Wheel theory and L'Hôpital's rule^{w}.
 $ \frac{1}{\text{nothing}}=\text{everything} $
 (Addition and multiplication are fast but division is slow ^{*}even for computers.)
Binary multiplication  

The binary numbers 101 and 110 are multiplied as follows:

        1 0 1   (5 in decimal)
      × 1 1 0   (6 in decimal)
      ---------
        0 0 0
      1 0 1
    1 0 1
    ---------
  = 1 1 1 1 0   (30 in decimal)

Binary numbers can also be multiplied with bits after a ^{*}binary point:

          1 0 1 . 1 0 1   (5.625 in decimal)
        × 1 1 0 . 0 1     (6.25 in decimal)
        -------------------
              1 . 0 1 1 0 1
            0 0 . 0 0 0 0
          0 0 0 . 0 0 0
        1 0 1 1 . 0 1
      1 0 1 1 0 . 1
      -------------------
  = 1 0 0 0 1 1 . 0 0 1 0 1   (35.15625 in decimal)
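Python's binary literals (0b...) can be used to check the worked examples above; for the fractional case I scale both factors to integers by shifting the binary point (a sketch of my own):

```python
# Integer case: 101 * 110 in binary.
assert 0b101 * 0b110 == 0b11110          # 5 * 6 == 30

# Fractional case, scaled to integers: 101.101 has 3 bits after the
# point and 110.01 has 2, so the product has 5 bits after the point.
a = 0b101101          # 5.625 * 2**3
b = 0b11001           # 6.25  * 2**2
assert a * b / 2**5 == 5.625 * 6.25      # == 35.15625 exactly (dyadic fractions)
```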
Our universe is tiny. Starting with only 2 people and doubling the population every 100 years will in only 27,000 years result in enough people to completely fill the observable universe. 
Irrational and complex numbers
Exponentiation^{w} (See Tutorial:exponents) is defined as repeated multiplication, and its inverses are roots^{w} and logarithms^{w}. But this leads to multiple equations with no solutions:
 Equations like $ \sqrt{2}=x. $ The solution is to generalize to the set of algebraic numbers^{w} (denoted $ \mathbb{A} $). (See also ^{*}algebraic integer and algebraically closed.) To see a proof that the square root of two is irrational see Square root of 2^{w}.
 Equations like $ 2^{\sqrt{2}}=x $ The solution (because x is transcendental^{w}) is to generalize to the set of Real numbers^{w} (denoted $ \mathbb{R} $).
 Equations like $ \sqrt{-1}=x $ and $ e^x=-1. $ The solution is to generalize to the set of complex numbers^{w} (denoted $ \mathbb{C} $) by defining $ i = \sqrt{-1} $. A single complex number $ z=a+bi $ consists of a real part a and an imaginary part bi (See Tutorial:complex numbers). Imaginary numbers^{w} (denoted $ \mathbb{I} $) often occur in equations involving change with respect to time. If friction is resistance to motion then imaginary friction would be resistance to change of motion with respect to time. (In other words, imaginary friction would be mass.) In fact, in the equation for the Spacetime interval^{w} (given below), ^{*}time itself is an imaginary quantity.
 The Complex conjugate^{w} of the complex number $ z=a+bi $ is $ \overline{z}=a-bi. $ (Not to be confused with the dual^{w} of a vector.)
 Complex numbers form an ^{*}Algebra over a field (Kalgebra) because complex multiplication is ^{*}Bilinear.
 $ \sqrt{-100} \cdot \sqrt{-100} = 10i \cdot 10i = -100 \neq \sqrt{(-100) \cdot (-100)} $
 The complex numbers are not ordered^{w}. However the absolute value^{w} or ^{*}modulus of a complex number is:
 $ |z| = |a + ib| = \sqrt{a^2+b^2} $
 A Gaussian integer a + bi is a Gaussian prime if and only if either:
 one of a, b is zero and the absolute value of the other is a prime number of the form 4n + 3 (with n a nonnegative integer), or
 both are nonzero and a^{2} + b^{2} is a prime number (which will not be of the form 4n + 3).
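The two cases can be turned into a small Python test (my own sketch; is_prime is a naive trial-division helper):

```python
def is_prime(n):
    """Naive trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_gaussian_prime(a, b):
    """Test a + bi against the two cases stated in the text."""
    if a == 0 or b == 0:
        m = abs(a) + abs(b)                 # |the nonzero part|
        return is_prime(m) and m % 4 == 3
    return is_prime(a * a + b * b)

assert is_gaussian_prime(0, 3)       # 3i: 3 is prime and 3 = 4*0 + 3
assert not is_gaussian_prime(2, 0)   # 2 is prime but 2 mod 4 != 3
assert is_gaussian_prime(1, 1)       # 1 + i: its norm 1^2 + 1^2 = 2 is prime
```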
 There are n distinct complex values of $ \sqrt[n]{z} $ (the n-th roots of z).
 0^0 = 1. See Empty product^{w}.
 $ \log_b(x) = \frac{\log_a(x)}{\log_a(b)} $
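The change-of-base identity is easy to confirm in Python (math.log also accepts the base directly as a second argument):

```python
import math

# log base 2 of 8, computed via natural logs per the identity above:
assert abs(math.log(8) / math.log(2) - 3.0) < 1e-12
# math.log takes the base as an optional second argument:
assert abs(math.log(8, 2) - 3.0) < 1e-12
```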
Hypercomplex numbers
Complex numbers can be used to represent and perform rotations^{w} but only in 2 dimensions. Hypercomplex numbers^{w} like quaternions^{w} (denoted $ \mathbb{H} $), octonions^{w} (denoted $ \mathbb{O} $), and ^{*}sedenions (denoted $ \mathbb{S} $) are one way to generalize complex numbers to some (but not all) higher dimensions.
A quaternion can be thought of as a complex number whose coefficients are themselves complex numbers (hence a hypercomplex number).
 $ (a + b\boldsymbol{\hat{\imath}}) + (c + d\boldsymbol{\hat{\imath}})\boldsymbol{\hat{\jmath}} = a + b\boldsymbol{\hat{\imath}} + c\boldsymbol{\hat{\jmath}} + d\boldsymbol{\hat{\imath}\hat{\jmath}} = a + b\boldsymbol{\hat{\imath}} + c\boldsymbol{\hat{\jmath}} + d\boldsymbol{\hat{k}} $
Where
 $ \boldsymbol{\hat{\imath}}^2 = \boldsymbol{\hat{\jmath}}^2 = \boldsymbol{\hat{k}}^2 = \boldsymbol{\hat{\imath}} \boldsymbol{\hat{\jmath}} \boldsymbol{\hat{k}} = -1 $
and
 $ \begin{alignat}{2} \boldsymbol{\hat{\imath}}\boldsymbol{\hat{\jmath}} & = \boldsymbol{\hat{k}}, & \qquad \boldsymbol{\hat{\jmath}}\boldsymbol{\hat{\imath}} & = -\boldsymbol{\hat{k}}, \\ \boldsymbol{\hat{\jmath}}\boldsymbol{\hat{k}} & = \boldsymbol{\hat{\imath}}, & \boldsymbol{\hat{k}}\boldsymbol{\hat{\jmath}} & = -\boldsymbol{\hat{\imath}}, \\ \boldsymbol{\hat{k}}\boldsymbol{\hat{\imath}} & = \boldsymbol{\hat{\jmath}}, & \boldsymbol{\hat{\imath}}\boldsymbol{\hat{k}} & = -\boldsymbol{\hat{\jmath}}. \end{alignat} $
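The multiplication table above determines the full Hamilton product. A minimal Python sketch (quaternions as (w, x, y, z) tuples; the function name qmul is my own):

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0); j = (0, 0, 1, 0); k = (0, 0, 0, 1)
assert qmul(i, j) == k                 # ij = k
assert qmul(j, i) == (0, 0, 0, -1)     # ji = -k: multiplication is not commutative
assert qmul(i, i) == (-1, 0, 0, 0)     # i^2 = -1
```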
Any finite-dimensional ^{*}division algebra over the reals must be:^{[2]}
 isomorphic to R or C if ^{*}unitary and commutative (equivalently: associative and commutative)
 isomorphic to the quaternions if noncommutative but associative
 isomorphic to the octonions if nonassociative but alternative.
The following is known about the dimension of a finite-dimensional division algebra A over a field K:
 dim A = 1 if K is algebraically closed,
 dim A = 1, 2, 4 or 8 if K is ^{*}real closed, and
 If K is neither algebraically nor real closed, then there are infinitely many dimensions in which there exist division algebras over K.
^{*}Splitcomplex numbers (hyperbolic complex numbers) are similar to complex numbers except that i^{2} = +1.
Tetration
Tetration^{w} is defined as repeated exponentiation and its inverses are called superroot and superlogarithm.
 $ \begin{matrix} {}^{b}a & = & \underbrace{a^{a^{{}^{.\,^{.\,^{.\,^a}}}}}} & = & a\uparrow\uparrow b = & \underbrace{a\uparrow (a\uparrow(\dots\uparrow a))} & \\ & & b\mbox{ copies of }a & & & b\mbox{ copies of }a \end{matrix} $
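Tetration can be sketched as a short loop in Python, evaluating the power tower from the top down (the empty tower is 1):

```python
def tetrate(a, b):
    """b copies of a in a power tower: a^(a^(...^a))."""
    result = 1            # the empty tower
    for _ in range(b):
        result = a ** result
    return result

assert tetrate(2, 3) == 16      # 2^(2^2)
assert tetrate(3, 2) == 27      # 3^3
assert tetrate(2, 4) == 65536   # 2^(2^(2^2))
```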
Hyperreal numbers
When a quantity, like the charge of a single electron, becomes so small that it is insignificant we, quite justifiably, treat it as though it were zero. A quantity that can be treated as though it were zero, even though it very definitely is not, is called infinitesimal. If $ q $ is a finite $ ( q \cdot 1 ) $ amount of charge then using Leibniz's notation^{w} $ dq $ would be an infinitesimal $ ( q \cdot 1/\infty ) $ amount of charge. See Differential^{w}
Likewise when a quantity becomes so large that a regular finite quantity becomes insignificant then we call it infinite. We would say that the mass of the ocean is infinite $ ( M \cdot \infty ) $. But compared to the mass of the Milky Way galaxy our ocean is insignificant. So we would say the mass of the Galaxy is doubly infinite $ ( M \cdot \infty^2 ) $.
Infinity and the infinitesimal are called Hyperreal numbers^{w} (denoted $ {}^*\mathbb{R} $). By the transfer principle, hyperreals obey the same first-order rules of arithmetic as real numbers. For example, $ 2 \cdot \infty $ is exactly twice as big as $ \infty. $ In reality, the mass of the ocean is a real number so it is hardly surprising that it behaves like one. See ^{*}Epsilon numbers and ^{*}Big O notation
In ancient times infinity was called the "all".
Groups and rings
 Main articles: Algebraic structure^{w}, Abstract algebra^{w}, and ^{*}group theory
Addition and multiplication can be generalized in so many ways that mathematicians have created a whole system just to categorize them.
A ^{*}magma is a set with a single ^{*}closed binary operation (usually, ^{*}but not always, addition. See ^{*}Additive group).
 a + b = c
A ^{*}semigroup is a magma where the addition is associative. See also ^{*}Semigroupoid
 a + (b + c) = (a + b) + c
A ^{*}monoid is a semigroup with an additive identity element.
 a + 0 = a
A ^{*}group is a monoid with additive inverse elements.
 a + (−a) = 0
An ^{*}abelian group is a group where the addition is commutative.
 a + b = b + a
A ^{*}pseudoring is an abelian group that also has a second closed, associative, binary operation (usually, but not always, multiplication).
 a * (b * c) = (a * b) * c
 And these two operations satisfy a distribution law.
 a(b + c) = ab + ac
A ^{*}ring is a pseudoring that has a multiplicative identity
 a * 1 = a
A ^{*}commutative ring is a ring where multiplication commutes, (e.g. ^{*}integers)
 a * b = b * a
A ^{*}field is a commutative ring where every element has a multiplicative inverse (and thus there is a multiplicative identity),
 a * (1/a) = 1
 The existence of a multiplicative inverse for every nonzero element automatically implies that there are no ^{*}zero divisors in a field
 if ab=0 for some a≠0, then we must have b=0 (we call this having no zero divisors).
The ^{*}characteristic of ring R, denoted char(R), is the number of times one must add the ^{*}multiplicative identity to get the ^{*}additive identity.
The ^{*}center of a ^{*}noncommutative ring is the commutative subring of elements c such that cx = xc for every x. See also: ^{*}Centralizer and normalizer.
All nonzero ^{*}nilpotent elements are ^{*}zero divisors.
 The square matrix^{w} $ A = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{pmatrix} $ is nilpotent
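We can confirm the nilpotence in pure Python (a small sketch; matmul is my own hand-rolled helper, no external libraries assumed):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
A3 = matmul(matmul(A, A), A)
assert A3 == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]   # nilpotent: A^3 = 0
```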
Intervals
 [2,5[ or [2,5) denotes the interval^{w} from 2 to 5, including 2 but excluding 5.
 [3..7] denotes all integers from 3 to 7.
 The set of all reals is unbounded at both ends.
 An open interval does not include its endpoints.
 ^{*}Compactness is a property that generalizes the notion of a subset being closed and bounded.
 The ^{*}unit interval is the closed interval [0,1]. It is often denoted I.
 The ^{*}unit square is a square whose sides have length 1.
 Often, "the" unit square refers specifically to the square in the Cartesian plane^{w} with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1).
 The ^{*}unit disk in the complex plane is the set of all complex numbers of absolute value less than one and is often denoted $ \mathbb {D} $
Vectors
 See also: ^{*}Algebraic geometry, ^{*}Algebraic variety, ^{*}Scheme, ^{*}Algebraic manifold, and Linear algebra^{w}
The one dimensional number line can be generalized to a multidimensional Cartesian coordinate system^{w} thereby creating multidimensional math (i.e. geometry^{w}). See also ^{*}Curvilinear coordinates
For sets A and B, the Cartesian product^{w} A × B is the set of all ordered pairs^{w} (a, b) where a ∈ A and b ∈ B.^{[3]}
 $ \mathbb{R}^3 $ is the Cartesian product $ \mathbb{R} \times \mathbb{R} \times \mathbb{R}. $
 $ \mathbb{R}^\infty = \mathbb{R}^\mathbb{N} $
 $ \mathbb{C}^3 $ is the Cartesian product^{w} $ \mathbb{C} \times \mathbb{C} \times \mathbb{C} $ (See ^{*}Complexification)
The ^{*}direct product generalizes the Cartesian product. 

(See also ^{*}Direct sum)

A vector space^{w} is a coordinate space^{w} with vector addition^{w} and scalar multiplication^{w} (multiplication of a vector and a scalar^{w} belonging to a field^{w}).
 If $ {\mathbf e_1} , {\mathbf e_2} , {\mathbf e_3} $ are orthogonal^{w} unit^{w} ^{*}basis vectors
 and $ {\mathbf u} , {\mathbf v} , {\mathbf x} $ are arbitrary vectors
 and $ u_n , v_n , x_n $ are scalars belonging to a field, then we can (and usually do) write:
 $ \mathbf{u} = u_1 \mathbf{e_1} + u_2 \mathbf{e_2} + u_3 \mathbf{e_3} = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix} $
 $ \mathbf{v} = v_1 \mathbf{e_1} + v_2 \mathbf{e_2} + v_3 \mathbf{e_3} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} $
 $ \mathbf{x} = x_1 \mathbf{e_1} + x_2 \mathbf{e_2} + x_3 \mathbf{e_3} = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} $
 See also: Linear independence^{w}
 A ^{*}module generalizes a vector space by allowing multiplication of a vector and a scalar belonging to a ring^{w}.
Coordinate systems define the length of vectors parallel to one of the axes but leave all other lengths undefined. This concept of "length", which only works for certain vectors, is generalized as the "norm^{w}", which works for all vectors. The norm of vector $ \mathbf{v} $ is denoted $ \|\mathbf{v}\|. $ The double bars are used to avoid confusion with the absolute value function.
 Taxicab metric^{w} (called L^{1} norm. See ^{*}L^{p} space. Sometimes called Lebesgue spaces. See also Lebesgue measure^{w}.) A circle in L^{1} space is shaped like a diamond.
 $ \|\mathbf{v}\| = |v_1| + |v_2| + |v_3| $
 In Euclidean space^{w} the norm (called the L^{2} norm) doesn't depend on the choice of coordinate system. As a result, rigid objects can rotate in Euclidean space. See proof of the Pythagorean theorem^{w} to the right. L^{2} is the only ^{*}Hilbert space among L^{p} spaces.
 $ \|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + v_3^2} $
 In Minkowski space^{w} (See ^{*}PseudoEuclidean space) the Spacetime interval^{w} is
 $ \|s\| = \sqrt{x^2 + y^2 + z^2 + (cti)^2} $
 In ^{*}complex space the most common norm of an n dimensional vector is obtained by treating it as though it were a regular real valued 2n dimensional vector in Euclidean space
 $ \left\| \boldsymbol{z} \right\| = \sqrt{z_1 \bar z_1 + \cdots + z_n \bar z_n} $
 Infinity norm. (In this space a circle is shaped like a square.)
 $ \left\| \mathbf{x} \right\| _\infty := \max \left( \left| x_1 \right| , \ldots , \left| x_n \right| \right) . $
 A ^{*}Banach space is a ^{*}normed vector space that is also a complete metric space^{w} (there are no points missing from it).
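The norms above can be compared on a single vector in a few lines of Python (a sketch assuming only the standard library):

```python
import math

v = [3.0, -4.0, 12.0]
l1   = sum(abs(x) for x in v)              # taxicab (L1) norm
l2   = math.sqrt(sum(x * x for x in v))    # Euclidean (L2) norm
linf = max(abs(x) for x in v)              # infinity norm

assert l1 == 19.0
assert l2 == 13.0          # sqrt(9 + 16 + 144) = sqrt(169)
assert linf == 12.0
assert linf <= l2 <= l1    # the three norms are always ordered this way
```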
Manifolds 

A manifold^{w} $ \mathbf{M} $ is a type of topological space^{w} in which each point has an infinitely small neighbourhood^{w} that is homeomorphic^{w} to Euclidean space^{w}. A manifold is locally, but not globally, Euclidean. A ^{*}Riemannian metric on a manifold allows distances and angles to be measured.
A ^{*}Lie group is a group that is also a finite-dimensional real smooth manifold, in which the group operations are smooth (for matrix groups the group operation is multiplication rather than addition).^{[5]} ^{*}n×n invertible matrices (See below) are a Lie group.

Multiplication of vectors
Multiplication can be generalized to allow for multiplication of vectors in 3 different ways:
Dot product
Dot product^{w} (a Scalar^{w}): $ \mathbf{u} \cdot \mathbf{v} = \| \mathbf{u} \|\, \| \mathbf{v}\| \cos(\theta) = u_1 v_1 + u_2 v_2 + u_3 v_3 $
 $ \mathbf{u}\cdot\mathbf{v} = \begin{bmatrix}u_1 & u_2 & u_3 \end{bmatrix} \begin{bmatrix}v_1 \\ v_2 \\ v_3 \end{bmatrix} = u_1 v_1 + u_2 v_2 + u_3 v_3 $
 Strangely, only parallel components multiply.
 The dot product can be generalized to the bilinear form^{w} $ \beta(\mathbf{u},\mathbf{v}) = \mathbf{u}^T A \mathbf{v} $ (a scalar) where A is a (0,2) tensor. (For the dot product in Euclidean space A is the identity tensor. But in Minkowski space A is the ^{*}Minkowski metric).
 Two vectors are orthogonal if $ \beta(\mathbf{u,v}) = 0. $
 A bilinear form is symmetric if $ \beta(\mathbf{u,v}) = \beta(\mathbf{v,u}) $
 Its associated ^{*}quadratic form is $ Q(\mathbf{x}) = \beta(\mathbf{x,x}). $
 In Euclidean space $ \|\mathbf{v}\|^2 = \mathbf{v}\cdot\mathbf{v} = Q(\mathbf{v}). $
 A nondegenerate bilinear form is one for which the associated matrix is invertible (its determinant is not zero)
 $ \beta(\mathbf{u,v})=0 \, $ for all v implies that u = 0.
 The inner product^{w} is a generalization of the dot product to complex vector space.
 $ \langle u,v\rangle=\overline{\langle v,u\rangle}=u\cdot \bar{v}=\langle v \mid u\rangle $ (See ^{*}Bra–ket notation.)
 The inner product can be generalized to a sesquilinear form^{w}
 A complex Hermitian form (also called a symmetric sesquilinear form), is a sesquilinear form h : V × V → C such that^{[6]} $ h(w,z) = \overline{h(z, w)}. $
 A is a ^{*}Hermitian operator iff^{w} $ \langle v \mid A u\rangle = \langle A v \mid u\rangle. $ Often written as $ \langle v \mid A \mid u\rangle. $
 The curl operator, $ \nabla\times $ is Hermitian.
 A ^{*}Hilbert space is an inner product space^{w} that is also a Complete metric space^{w}.
 A function can be treated as a vector in infinite dimensional space with one dimension, and therefore one component, for each point; the length of each component is the value of the function at that point. As such, functions have an inner product. (See The Dot, oops, INNER Product) The inner product of $ f $ and $ g $ on the domain between $ a $ and $ b $ is
 $ \langle f,g\rangle=\int\limits_a^b f\cdot\overline{g}\,dx $
 If this is equal to 0, the functions are said to be orthogonal on the interval. Unlike with vectors, this has no geometric significance but this definition is useful in ^{*}Fourier analysis. See below.
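This kind of orthogonality can be checked numerically. A Python sketch using a midpoint-rule integral (the helper name inner is my own; sin and cos are orthogonal over a full period):

```python
import math

def inner(f, g, a, b, n=10000):
    """Midpoint-rule approximation of the inner product integral of f*g on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

# sin and cos are orthogonal on [0, 2*pi]:
assert abs(inner(math.sin, math.cos, 0, 2 * math.pi)) < 1e-6
# sin is not orthogonal to itself: <sin, sin> = pi over a full period
assert abs(inner(math.sin, math.sin, 0, 2 * math.pi) - math.pi) < 1e-6
```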
Outer product
Outer product^{w} (a tensor^{w} called a dyadic^{w}):$ \mathbf{u} \otimes \mathbf{v}. $
 As one would expect, every component of one vector multiplies with every component of the other vector.

 For complex vectors, it is customary to use the conjugate transpose of v (denoted v^{H} or v*):^{[7]}
 $ \mathbf{u} \otimes \mathbf{v} = \mathbf{u} \mathbf{v}^\mathrm{H} = \mathbf{u} \mathbf{v}^*\ $
 Taking the dot product of u⊗v and any vector x (See Visualization of Tensor multiplication^{w}) causes the components of x not pointing in the direction of v to become zero. What remains is then rotated from v to u. Therefore an outer product rotates one component of a vector and causes all other components to become zero.
 $ \mathbf{e}_1 \otimes \mathbf{e}_2 \cdot \mathbf{e}_2 = \mathbf{e}_1 $
 To rotate a vector with 2 components you need the sum of at least 2 outer products (a bivector). But this is still not perfect. Any 3rd component not in the plane of rotation will become zero.
 A true 3 dimensional rotation matrix can be constructed by summing three outer products. The first two sum to form a bivector. The third one rotates the axis of rotation zero degrees but is necessary to prevent that dimension from being squashed to nothing. $ \mathbf{e}_1 \otimes \mathbf{e}_2 - \mathbf{e}_2 \otimes \mathbf{e}_1 + \mathbf{e}_3 \otimes \mathbf{e}_3 $
 The Tensor product^{w} generalizes the outer product^{w}.
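The three-outer-product construction above can be verified in pure Python (outer, add_m, and apply_m are my own small helpers):

```python
def outer(u, v):
    """Outer product u (x) v as a matrix: entry (i, j) is u_i * v_j."""
    return [[ui * vj for vj in v] for ui in u]

def add_m(A, B, sign=1):
    """Entrywise A + sign*B."""
    return [[a + sign * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def apply_m(M, x):
    """Matrix-vector product."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
# e1 (x) e2  -  e2 (x) e1  +  e3 (x) e3 : a 90-degree rotation about the e3 axis
R = add_m(add_m(outer(e1, e2), outer(e2, e1), sign=-1), outer(e3, e3))
assert apply_m(R, e2) == e1     # e2 is rotated onto e1
assert apply_m(R, e3) == e3     # the rotation axis is left untouched
```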
Geometric product
The geometric product^{w} will be explained in detail below.
Wedge product
Wedge product^{w} (a simple bivector^{w}): $ \mathbf{u} \wedge \mathbf{v} = \mathbf{u} \otimes \mathbf{v} - \mathbf{v} \otimes \mathbf{u} = [\overline{\mathbf{u}}, \overline{\mathbf{v}}] $
 The wedge product of 2 vectors is equal to the ^{*}geometric product minus the inner product as will be explained in detail below.
 The wedge product is also called the exterior product^{w} (sometimes mistakenly called the outer product).
 The term "exterior" comes from the exterior product of two vectors not being a vector.
 Just as a vector has length and direction so a bivector has an area and an orientation.
 In three dimensions $ \mathbf{u} \wedge \mathbf{v} $ is the dual^{w} of the cross product^{w} which is a pseudovector^{w}. $ \overline{\mathbf{u} \wedge \mathbf{v}} = \mathbf{u} \times \mathbf{v} $
$ \mathbf{a \wedge b \wedge c = a \otimes b \otimes c - a \otimes c \otimes b + c \otimes a \otimes b - c \otimes b \otimes a + b \otimes c \otimes a - b \otimes a \otimes c} $
 The triple product^{w} a∧b∧c is a trivector which is a 3rd degree tensor.
 In 3 dimensions a trivector is a pseudoscalar so in 3 dimensions every trivector can be represented as a scalar times the unit trivector. See LeviCivita symbol^{w}
 $ \mathbf{a}\wedge\mathbf{b}\wedge\mathbf{c} = \mathbf{a}\cdot(\mathbf{b}\times \mathbf{c}) \; \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3 $
 The dual^{w} of vector a is bivector ā:
 $ \overline{\mathbf{a}} \quad\stackrel{\rm def}{=} \quad\begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix} $
Covectors
The Mississippi flows at about 3 km per hour. Km per hour has both direction and magnitude and is a vector.
The Mississippi flows downhill about one foot per km. Feet per km has direction and magnitude but is not a vector. It's a covector.
The difference between a vector and a covector becomes apparent when changing units. If we measured in meters instead of km then 3 km per hour becomes 3000 meters per hour. The numerical value increases. Vectors are therefore contravariant.
But 1 Foot per km becomes 0.001 foot per meter. The numerical value decreases. Covectors are therefore covariant.
Tensors are more complicated. They can be part contravariant and part covariant.
A (1,1) Tensor is one part contravariant and one part covariant. It is totally unaffected by a change of units. It is these that we will study in the next section.
Tensors
 See also: ^{*}Matrix norm and ^{*}Tensor contraction
 External links: Review of Linear Algebra and HighOrder Tensors
Just as a vector is a sum of unit vectors multiplied by constants so a tensor is a sum of unit dyadics ($ e_1 \otimes e_2 $) multiplied by constants. Each dyadic is associated with a certain plane segment having a certain orientation and magnitude. (But a dyadic is not the same thing as a bivector^{w}.)
A simple tensor is a tensor that can be written as a product of tensors of the form $ T=a\otimes b\otimes\cdots\otimes d. $ (See Outer Product above.) The rank of a tensor T is the minimum number of simple tensors that sum to T.^{[8]} A bivector^{w} is a tensor of rank 2.
The order or degree of the tensor is the dimension of the tensor which is the total number of indices required to identify each component uniquely.^{[9]} A vector is a 1storder tensor.
Complex numbers can be used to represent and perform rotations^{w} but only in 2 dimensions.
Tensors^{w}, on the other hand, can be used in any number of dimensions to represent and perform rotations and other linear transformations^{w}. See the image to the right.
 Any affine transformation^{w} is equivalent to a linear transformation followed by a translation^{w} of the origin. (The origin^{w} is always a fixed point for any linear transformation.) "Translation" is just a fancy word for "move".
Multiplying a tensor and a vector results in a new vector that can not only have a different magnitude but can even point in a completely different direction:
 $ \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \\ \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} a_1x+a_2y+a_3z\\ b_1x+b_2y+b_3z\\ c_1x+c_2y+c_3z\\ \end{bmatrix} $
Some special cases:
 $ \begin{bmatrix} {\color{green}a_1} & a_2 & a_3 \\ {\color{green}b_1} & b_2 & b_3 \\ {\color{green}c_1} & c_2 & c_3 \\ \end{bmatrix} \begin{bmatrix} 1\\ 0\\ 0\\ \end{bmatrix} = \begin{bmatrix} a_1\\ b_1\\ c_1\\ \end{bmatrix} $
 $ \begin{bmatrix} a_1 & {\color{green}a_2} & a_3 \\ b_1 & {\color{green}b_2} & b_3 \\ c_1 & {\color{green}c_2} & c_3 \\ \end{bmatrix} \begin{bmatrix} 0\\ 1\\ 0\\ \end{bmatrix} = \begin{bmatrix} a_2\\ b_2\\ c_2\\ \end{bmatrix} $
 $ \begin{bmatrix} a_1 & a_2 & {\color{green}a_3} \\ b_1 & b_2 & {\color{green}b_3} \\ c_1 & c_2 & {\color{green}c_3} \\ \end{bmatrix} \begin{bmatrix} 0\\ 0\\ 1\\ \end{bmatrix} = \begin{bmatrix} a_3\\ b_3\\ c_3\\ \end{bmatrix} $
One can also multiply a tensor with another tensor. Each column of the second tensor is transformed exactly as a vector would be.
 $ \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \\ \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \\ \end{bmatrix} $
And we can also switch things around using a ^{*}Permutation matrix. (See also ^{*}Permutation group):
 $ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} x\\ z\\ y\\ \end{bmatrix} $
 $ \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} z\\ y\\ x\\ \end{bmatrix} $
 $ \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} y\\ x\\ z\\ \end{bmatrix} $
Matrices do not in general commute:
 $ \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & a \\ b & 0 \end{pmatrix} $
 $ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = \begin{pmatrix} 0 & b \\ a & 0 \end{pmatrix} $
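The same pair of matrices in Python, with a = 2 and b = 3 (matmul2 is my own two-by-two multiply):

```python
def matmul2(A, B):
    """Product of two 2x2 matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = [[2, 0], [0, 3]]       # diag(a, b) with a = 2, b = 3
S = [[0, 1], [1, 0]]       # the swap (permutation) matrix

assert matmul2(D, S) == [[0, 2], [3, 0]]
assert matmul2(S, D) == [[0, 3], [2, 0]]
assert matmul2(D, S) != matmul2(S, D)    # the two products differ: no commutativity
```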
The Determinant^{w} of a matrix is the area or volume spanned by its column vectors and is frequently useful.
 $ \det A=\begin{vmatrix}a&b\\c&d\end{vmatrix}=ad-bc $
A Unitary matrix^{w} is a complex square matrix whose rows (or columns) form an ^{*}orthonormal basis of $ \mathbb{C}^n $ with respect to the usual inner product.

An orthogonal matrix is a real unitary matrix. Its columns and rows are orthogonal unit vectors (i.e., ^{*}orthonormal vectors). A permutation matrix is an orthogonal matrix.

A Hermitian matrix^{w} is a complex square matrix that is equal to its own conjugate transpose^{w}. $ A = \overline {A^\text{T}} = A^\dagger = A^\text{H} $ The diagonal elements must be real.

A symmetric matrix^{w} is a real Hermitian matrix. It is equal to its transpose^{w}. $ A = A^\mathrm{T}. $

A ^{*}Skew-Hermitian matrix is a complex square matrix whose conjugate transpose is its negative. $ A^\dagger = -A. $ The diagonal elements must be purely imaginary or zero.

A Skew-symmetric matrix^{w} is a real skew-Hermitian matrix. Its transpose equals its negative: A^{T} = −A. The diagonal elements must be zero.

All unitary^{w}, Hermitian^{w}, and ^{*}skewHermitian matrices are normal. 
All orthogonal^{w}, symmetric^{w}, and skewsymmetric^{w} matrices are normal. 
A matrix is normal if and only if it is unitarily ^{*}diagonalizable.
A diagonal matrix^{w}:
 $ \begin{bmatrix} 6 & 0 & 0 \\ 0 & 7i & 0 \\ 0 & 0 & 1+9i \end{bmatrix} $
The determinant of a diagonal matrix:
 $ \begin{vmatrix} a&0&0\\ 0&b&0\\ 0&0&c \end{vmatrix} =abc $
A superdiagonal entry is one that is directly above and to the right of the main diagonal. A subdiagonal entry is one that is directly below and to the left of the main diagonal. The eigenvalues of diag(λ_{1}, ..., λ_{n}) are λ_{1}, ..., λ_{n} with associated eigenvectors of e_{1}, ..., e_{n}.
A ^{*}spectral theorem is a result about when a matrix can be diagonalized. This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations.
Matrices do have zero divisors:
$ A= \begin{pmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\0&0&0&0\end{pmatrix}, \quad A^2= \begin{pmatrix}0&0&1&0\\0&0&0&1\\0&0&0&0\\0&0&0&0\end{pmatrix}, \quad A^3= \begin{pmatrix}0&0&0&1\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}, \quad A^4=0 $
From Wikipedia:Matrix similarity
In linear algebra, two nbyn matrices A and B are called similar if
 $ B = P^{-1} A P $
for some invertible nbyn matrix P. Similar matrices represent the same ^{*}linear operator under two (possibly) different ^{*}bases, with P being the ^{*}change of basis matrix.
A transformation A ↦ P^{−1}AP is called a similarity transformation or conjugation of the matrix A. In the ^{*}general linear group, similarity is therefore the same as ^{*}conjugacy, and similar matrices are also called conjugate; however in a given subgroup H of the general linear group, the notion of conjugacy may be more restrictive than similarity, since it requires that P be chosen to lie in H.
Decomposition of tensors 

Every tensor of degree 2 can be decomposed into a symmetric and an antisymmetric tensor:
 $ T_{ij} = \tfrac{1}{2}\left(T_{ij} + T_{ji}\right) + \tfrac{1}{2}\left(T_{ij} - T_{ji}\right) $
The Outer product (tensor product) of a vector with itself is a symmetric tensor:
 $ (v \otimes v)_{ij} = v_i v_j = (v \otimes v)_{ji} $
The wedge product of 2 vectors is antisymmetric:
 $ (u \wedge v)_{ij} = u_i v_j - u_j v_i = -(u \wedge v)_{ji} $
Any $ n\times n $ matrix X with complex entries can be expressed as
 $ X = X_s + X_n $
where $ X_s $ is diagonalizable, $ X_n $ is nilpotent, and $ X_s $ commutes with $ X_n $. This is the ^{*}Jordan–Chevalley decomposition. 
Block matrix 

A matrix can be partitioned into blocks; for example, a 4×4 matrix can be partitioned into four 2×2 blocks, and the partitioned matrix can then be written in terms of those blocks:
 $ \mathbf{P} = \begin{bmatrix} \mathbf{P}_{11} & \mathbf{P}_{12} \\ \mathbf{P}_{21} & \mathbf{P}_{22} \end{bmatrix} $
If $ \mathbf{A} $ is an $ (m\times p) $ matrix with $ q $ row partitions and $ s $ column partitions, and $ \mathbf{B} $ is a $ (p\times n) $ matrix with $ s $ row partitions and $ r $ column partitions, then the matrix product
 $ \mathbf{C} = \mathbf{A}\mathbf{B} $
can be formed blockwise, yielding $ \mathbf{C} $ as an $ (m\times n) $ matrix with $ q $ row partitions and $ r $ column partitions. The matrices in the resulting matrix $ \mathbf{C} $ are calculated by multiplying:
 $ \mathbf{C}_{\alpha\beta} = \sum_{\gamma=1}^{s} \mathbf{A}_{\alpha\gamma}\mathbf{B}_{\gamma\beta} $
Or, using the ^{*}Einstein notation that implicitly sums over repeated indices:
 $ \mathbf{C}_{\alpha\beta} = \mathbf{A}_{\alpha\gamma}\mathbf{B}_{\gamma\beta} $

Linear groups
A square matrix^{w} of order n is an n-by-n matrix. Any two square matrices of the same order can be added and multiplied. A matrix is invertible if and only if its determinant is nonzero.
GL_{n}(F) or GL(n, F), or simply GL(n) is the group (a ^{*}Lie group when F = R or C) of n×n invertible matrices with entries from the field F. The group GL(n, F) and its subgroups are often called linear groups or matrix groups.
 SL(n, F) or SL_{n}(F), is the ^{*}subgroup of GL(n, F) consisting of matrices with a determinant^{w} of 1.
 U(n), the Unitary group of degree n is the group^{w} of n × n unitary matrices^{w}. (The determinant of a unitary matrix is a complex number with absolute value 1; in the special unitary group it equals 1.) The group operation is matrix multiplication^{w}.^{[10]}
 SU(n), the special unitary group of degree n, is the ^{*}Lie group of n×n unitary matrices^{w} with determinant^{w} 1.
Symmetry groups
^{*}Affine group
 ^{*}Poincaré group: boosts, rotations, translations
 ^{*}Lorentz group: boosts, rotations
 The set of all boosts, however, does not form a subgroup, since composing two boosts does not, in general, result in another boost. (Rather, a pair of non-collinear boosts is equivalent to a boost and a rotation, and this relates to Thomas rotation.)
Aff(n,K): the affine group or general affine group of any affine space over a field K is the group of all invertible affine transformations from the space into itself.
 E(n): rotations, reflections, and translations.
 O(n): rotations, reflections
 SO(n): rotations
 so(3) is the Lie algebra of SO(3) and consists of all skew-symmetric^{w} 3 × 3 matrices.
Clifford group: The set of invertible elements x such that for all v in V $ x v \alpha(x)^{-1}\in V . $ The ^{*}spinor norm Q is defined on the Clifford group by $ Q(x) = x^\mathrm{t}x. $
 Pin_{V}(K): The subgroup of elements of spinor norm 1. Maps 2-to-1 to the orthogonal group.
 Spin_{V}(K): The subgroup of elements of Dickson invariant 0 in Pin_{V}(K). When the characteristic is not 2, these are the elements of determinant 1. Maps 2-to-1 to the special orthogonal group. Elements of the spin group act as linear transformations on the space of spinors.
Rotations
In 4 spatial dimensions a rigid object can ^{*}rotate in 2 different ways simultaneously.
 See also: ^{*}Hypersphere of rotations, ^{*}Rotation group SO(3), ^{*}Special unitary group, ^{*}Plate trick, ^{*}Spin representation, ^{*}Spin group, ^{*}Pin group, ^{*}Spinor, Clifford algebra^{w}, ^{*}Indefinite orthogonal group, ^{*}Root system, Bivectors^{w}, Curl^{w}
Consider the solid ball in R^{3} of radius π. For every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The two rotations through π and through −π are the same. So we ^{*}identify (or "glue together") ^{*}antipodal points on the surface of the ball.
The ball with antipodal surface points identified is a ^{*}smooth manifold, and this manifold is ^{*}diffeomorphic to the rotation group. It is also diffeomorphic to the ^{*}real 3-dimensional projective space RP^{3}, so the latter can also serve as a topological model for the rotation group.
These identifications illustrate that SO(3) is ^{*}connected but not ^{*}simply connected. As to the latter, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open". (In other words one full rotation is not equivalent to doing nothing.)
Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The ^{*}Balinese plate trick and similar tricks demonstrate this practically.
The same argument can be performed in general, and it shows that the ^{*}fundamental group of SO(3) is the cyclic group^{w} of order 2. In physics applications, the nontriviality of the fundamental group allows for the existence of objects known as ^{*}spinors, and is an important tool in the development of the ^{*}spin-statistics theorem.
Spin group 

The ^{*}universal cover of SO(3) is a ^{*}Lie group called ^{*}Spin(3). The group Spin(3) is isomorphic to the ^{*}special unitary group SU(2); it is also diffeomorphic to the unit ^{*}3-sphere S^{3} and can be understood as the group of ^{*}versors (quaternions^{w} with absolute value^{w} 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in ^{*}quaternions and spatial rotation. The map from S^{3} onto SO(3) that identifies antipodal points of S^{3} is a ^{*}surjective ^{*}homomorphism of Lie groups, with ^{*}kernel {±1}. Topologically, this map is a two-to-one ^{*}covering map. (See the ^{*}plate trick.)
The spin group Spin(n)^{[11]}^{[12]} is the ^{*}double cover of the ^{*}special orthogonal group SO(n) = SO(n, R), such that there exists a ^{*}short exact sequence of ^{*}Lie groups (with n ≠ 2)
 $ 1 \to \mathbb{Z}_2 \to \operatorname{Spin}(n) \to \operatorname{SO}(n) \to 1 $
As a Lie group, Spin(n) therefore shares its ^{*}dimension, n(n − 1)/2, and its ^{*}Lie algebra with the special orthogonal group. For n > 2, Spin(n) is ^{*}simply connected and so coincides with the ^{*}universal cover of ^{*}SO(n). The nontrivial element of the kernel is denoted −1, which should not be confused with the orthogonal transform of ^{*}reflection through the origin, generally denoted −I . Spin(n) can be constructed as a ^{*}subgroup of the invertible elements in the Clifford algebra^{w} Cl(n). A distinct article discusses the ^{*}spin representations. 
Matrix representations
 See also: ^{*}Group representation, ^{*}Presentation of a group, ^{*}Abstract algebra
Real numbers
If a vector is multiplied by the ^{*}identity matrix $ I $ then the vector is completely unchanged:
 $ I \cdot v = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = 1 \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} $
And if $ A=a \cdot I $ then
 $ A \cdot v = \begin{bmatrix} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & a \\ \end{bmatrix} \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = a \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} a\cdot x\\ a \cdot y\\ a\cdot z\\ \end{bmatrix} $
Therefore $ A=a \cdot I $ can be thought of as the matrix form of the scalar a. The scalar matrices are the center of the algebra of matrices.
 $ A + B = \begin{bmatrix} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & a \\ \end{bmatrix} + \begin{bmatrix} b & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & b \\ \end{bmatrix} = \begin{bmatrix} a + b & 0 & 0 \\ 0 & a + b & 0 \\ 0 & 0 & a + b \\ \end{bmatrix} $
 $ A \cdot B = \begin{bmatrix} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & a \\ \end{bmatrix} \begin{bmatrix} b & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & b \\ \end{bmatrix} = \begin{bmatrix} ab & 0 & 0 \\ 0 & ab & 0 \\ 0 & 0 & ab \\ \end{bmatrix} $
 $ A^{-1} = \begin{bmatrix} 1/a & 0 & 0 \\ 0 & 1/a & 0 \\ 0 & 0 & 1/a \\ \end{bmatrix} $
 $ \det A = \begin{vmatrix} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & a \end{vmatrix} = a^3 $
 $ A^B = \begin{bmatrix} a^b & 0 & 0 \\ 0 & a^b & 0 \\ 0 & 0 & a^b \\ \end{bmatrix} $
 $ e^A= \begin{bmatrix} e^a & 0 & 0 \\ 0 & e^a & 0 \\ 0 & 0 & e^a \\ \end{bmatrix} $.
 $ \ln A= \begin{bmatrix} \ln a & 0 & 0 \\ 0 & \ln a & 0 \\ 0 & 0 & \ln a \\ \end{bmatrix} $.
(Note: Not all matrices have a logarithm, and a matrix that does have a logarithm may have more than one. The study of logarithms of matrices leads to Lie theory: when a matrix has a logarithm, it lies in a Lie group, and the logarithm is the corresponding element of the vector space of the Lie algebra.)
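As a quick check, the arithmetic of scalar matrices $ A = aI $ mirrors ordinary scalar arithmetic. A minimal NumPy sketch, with arbitrary example values for a and b:

```python
import numpy as np

# Sketch: matrices of the form A = a*I act exactly like the scalar a.
a, b = 5.0, 3.0
I = np.eye(3)
A = a * I
B = b * I

sum_matrix = A + B               # represents a + b
prod_matrix = A @ B              # represents a * b
inv_matrix = np.linalg.inv(A)    # represents 1 / a
det_A = np.linalg.det(A)         # equals a**3 for a 3x3 scalar matrix
```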
Complex numbers
Complex numbers can also be written in matrix form^{w} in such a way that complex multiplication corresponds perfectly to matrix multiplication:
 $ \begin{align} (a+ib)(c+id) &= \begin{bmatrix} a & -b \\ b & a \end{bmatrix} \begin{bmatrix} c & -d \\ d & c \end{bmatrix} \\ &= \begin{bmatrix} ac-bd & -(ad+bc) \\ ad+bc & ac-bd \end{bmatrix} \end{align} $
 $ \begin{align} (i)(i) &= \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \\ &= \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \\ &= -I \end{align} $
The absolute value of a complex number is defined by the Euclidean distance of its corresponding point in the complex plane from the origin computed using the Pythagorean theorem.
 $ |z|^2 = \begin{vmatrix} a & -b \\ b & a \end{vmatrix} = a^2 + b^2. $
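A sketch of this correspondence in NumPy, using the sign convention that $ i $ maps to the matrix [[0, −1], [1, 0]] (one of the two standard choices):

```python
import numpy as np

# Sketch of the 2x2 real-matrix form of a complex number a + ib,
# with i represented by [[0, -1], [1, 0]].
def to_matrix(z):
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 3j, 1 - 4j   # arbitrary example values

product_of_matrices = to_matrix(z) @ to_matrix(w)
matrix_of_product = to_matrix(z * w)

det_z = np.linalg.det(to_matrix(z))   # equals |z|^2 = 2**2 + 3**2
```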
Quaternions
There are at least two ways of representing quaternions as matrices^{w} in such a way that quaternion addition and multiplication correspond to matrix addition and matrix multiplication^{w}.
Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as
 $ \begin{bmatrix} z_1 & z_2 \\ -\overline{z_2} & \overline{z_1} \end{bmatrix} = \begin{bmatrix} a+bi & c+di \\ -(c-di) & a-bi \end{bmatrix}= a \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} + b \begin{bmatrix} i & 0 \\ 0 & -i \\ \end{bmatrix} + c \begin{bmatrix} 0 & 1 \\ -1 & 0 \\ \end{bmatrix} + d \begin{bmatrix} 0 & i \\ i & 0 \\ \end{bmatrix}. $
Multiplying any two Pauli matrices always yields a quaternion unit matrix. See Isomorphism to quaternions below.
By replacing each 0, 1, and i with its 2 × 2 matrix representation that same quaternion can be written as a 4 × 4 real (^{*}block) matrix:
$ \begin{bmatrix} a & -b & c & -d \\ b & a & d & c \\ -c & -d & a & b \\ d & -c & -b & a \end{bmatrix}= a \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} + b \begin{bmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix} + c \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{bmatrix} + d \begin{bmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}. $
Therefore:
 $ \begin{align} b \cdot i &= b \cdot (e_1 \wedge e_2 + e_4 \wedge e_3)\\ c \cdot j &= c \cdot (e_1 \wedge e_3 + e_2 \wedge e_4)\\ d \cdot k &= d \cdot (e_1 \wedge e_4 + e_3 \wedge e_2)\\ \end{align} $
However, the representation of quaternions in M(4,ℝ) is not unique; in fact, there exist 48 distinct representations of this form. Each 4×4 matrix representation of quaternions corresponds to a multiplication table of unit quaternions. See Wikipedia:Quaternion#Matrix_representations.
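One such representation can be verified numerically. The sketch below uses one choice of sign conventions (an assumption, since the 48 representations differ only in signs and orderings) for the 4×4 real matrices standing in for i and j, and checks the quaternion relations:

```python
import numpy as np

# Sketch: 4x4 real matrices for the quaternion units i and j
# (one of the many valid sign conventions), with k defined as ij.
# The relations i^2 = j^2 = k^2 = ijk = -1 must all hold.
one = np.eye(4)
qi = np.array([[0, -1,  0, 0],
               [1,  0,  0, 0],
               [0,  0,  0, 1],
               [0,  0, -1, 0]], dtype=float)
qj = np.array([[ 0,  0, 1, 0],
               [ 0,  0, 0, 1],
               [-1,  0, 0, 0],
               [ 0, -1, 0, 0]], dtype=float)
qk = qi @ qj   # k = ij
```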
The obvious way of representing quaternions with 3 × 3 real matrices (skew-symmetric rotation generators) does not work, because the product of the matrices standing in for $ i $ and $ j $ is not the matrix standing in for $ k $:
 $ b \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \cdot c \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} \neq d \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & {\color{red}1} & 0 \end{bmatrix}. $
Vectors
Euclidean
 See also: ^{*}Splitcomplex numbers
Unfortunately the matrix representation of a vector is not so obvious. First we must decide what properties the matrix should have. To see why, consider the square (^{*}quadratic form) of a single vector:
 $ Q(c) = c^2 = \langle c , c \rangle = (a\mathbf{e_1} + b\mathbf{e_2})^2 $
 $ Q(c) = aa\mathbf{e_1}\mathbf{e_1} + bb\mathbf{e_2}\mathbf{e_2} + ab\mathbf{e_1}\mathbf{e_2} + ba\mathbf{e_2}\mathbf{e_1} $
 $ Q(c) = a^2\mathbf{e_1}\mathbf{e_1} + b^2\mathbf{e_2}\mathbf{e_2} + ab(\mathbf{e_1}\mathbf{e_2} + \mathbf{e_2}\mathbf{e_1}) $
From the Pythagorean theorem we know that:
 $ c^2 = a^2 + b^2 + ab(0) = Scalar $
So we know that
 $ e_1^2 = e_2^2 = 1 $
 $ e_1 e_2 = -e_2 e_1 $
This particular Clifford algebra is known as Cl_{2,0}. The subscript 2 indicates that the 2 basis vectors are square roots of +1. See ^{*}Metric signature. If we had used $ c^2 = -a^2 - b^2 $ then the result would have been Cl_{0,2}.
The set of 3 matrices in 3 dimensions that have these properties are called ^{*}Pauli matrices. The algebra generated by the three Pauli matrices is isomorphic to the Clifford algebra of ℝ^{3}.
The Pauli matrices are a set of three 2 × 2 complex^{w} matrices^{w} which are Hermitian^{w} and unitary^{w}.^{[13]} They are
 $ \begin{align} \sigma_1 = \sigma_x &= \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \\ \sigma_2 = \sigma_y &= \begin{pmatrix} 0&-i\\ i&0 \end{pmatrix} \\ \sigma_3 = \sigma_z &= \begin{pmatrix} 1&0\\ 0&-1 \end{pmatrix} \,. \end{align} $
Squaring a Pauli matrix results in a "scalar":
 $ \sigma_1^2 = \sigma_2^2 = \sigma_3^2 = \begin{pmatrix} 1&0\\ 0&1 \end{pmatrix} = \sigma_0 = I $
Do not confuse this scalar with the vectors above: $ \sigma_0 = I $ may look similar to the Pauli matrices, but it is the matrix representation of a scalar, not of a vector. Scalars and vectors are entirely different objects, and so are their matrix representations.
Multiplication is ^{*}anticommutative:
 $ \sigma_1 \sigma_2 = -\sigma_2 \sigma_1 $
 $ \sigma_2 \sigma_3 = -\sigma_3 \sigma_2 $
 $ \sigma_3 \sigma_1 = -\sigma_1 \sigma_3 $
And
 $ \sigma_1 \sigma_2 \sigma_3 = \begin{pmatrix} i&0\\0&i\end{pmatrix} = iI $
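These relations are easy to verify numerically; a minimal NumPy sketch:

```python
import numpy as np

# Sketch check of the Pauli matrix relations: each squares to the
# identity, distinct ones anticommute, and their product is i times I.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
```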
Exponential of a Pauli vector which is analogous to Euler's formula, extended to quaternions:
 $ e^{i a(\hat{n} \cdot \vec{\sigma})} = I\cos{a} + i (\hat{n} \cdot \vec{\sigma}) \sin{a} $
commutation^{w} relations:
 $ \begin{align} \left[\sigma_1, \sigma_2\right] &= 2i\sigma_3 \, \\ \left[\sigma_2, \sigma_3\right] &= 2i\sigma_1 \, \\ \left[\sigma_3, \sigma_1\right] &= 2i\sigma_2 \, \\ \left[\sigma_1, \sigma_1\right] &= 0\, \\ \end{align} $
^{*}anticommutation relations:
 $ \begin{align} \left\{\sigma_1, \sigma_1\right\} &= 2I\, \\ \left\{\sigma_1, \sigma_2\right\} &= 0\,.\\ \end{align} $
Adding the commutator ($ ab - ba $) to the anticommutator ($ ab + ba $) and dividing by 2 gives the general formula for multiplying any 2 arbitrary "vectors" (or rather their matrix representations):
 $ (\vec{a} \cdot \vec{\sigma})(\vec{b} \cdot \vec{\sigma}) = (\vec{a} \cdot \vec{b}) \, I + i ( \vec{a} \times \vec{b} )\cdot \vec{\sigma} $
If $ i $ is identified with the pseudoscalar $ \sigma_x \sigma_y \sigma_z $ then the right hand side becomes $ a \cdot b + a \wedge b $ which is also the definition for the geometric product^{w} of two vectors in geometric algebra^{w} (Clifford algebra^{w}). The geometric product of two vectors is a multivector^{w}.
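A numerical sketch of the product formula, for two arbitrary example vectors a and b:

```python
import numpy as np

# Sketch: check  (a . sigma)(b . sigma) = (a . b) I + i (a x b) . sigma
# for arbitrary example vectors a and b.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

a_dot_sigma = np.einsum('i,ijk->jk', a, sigma)   # a . sigma
b_dot_sigma = np.einsum('i,ijk->jk', b, sigma)   # b . sigma

lhs = a_dot_sigma @ b_dot_sigma
rhs = np.dot(a, b) * np.eye(2) + 1j * np.einsum('i,ijk->jk', np.cross(a, b), sigma)
```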
For any 2 arbitrary vectors:
 $ fd = force \cdot distance $
 $ fd = (f_1\mathbf{e_1} + f_2\mathbf{e_2})(d_1\mathbf{e_1} + d_2\mathbf{e_2}) $
 $ fd = f_1d_1\mathbf{e_1}\mathbf{e_1} + f_2d_2\mathbf{e_2}\mathbf{e_2} + f_1d_2\mathbf{e_1}\mathbf{e_2} + f_2d_1\mathbf{e_2}\mathbf{e_1} $
 $ fd = f_1d_1\mathbf{e_1}\mathbf{e_1} + f_2d_2\mathbf{e_2}\mathbf{e_2} + f_1d_2\mathbf{e_1}\mathbf{e_2} - f_2d_1\mathbf{e_1}\mathbf{e_2} $
 $ fd = f_1d_1\mathbf{e_1}\mathbf{e_1} + f_2d_2\mathbf{e_2}\mathbf{e_2} + (f_1d_2 - f_2d_1)\mathbf{e_1}\mathbf{e_2} $
Applying the rules of Clifford algebra we get:
 $ fd = f_1d_1 + f_2d_2 + (f_1d_2 - f_2d_1)\mathbf{e_1} \wedge \mathbf{e_2} $
 $ fd = Energy + Torque $
 $ fd = {\color{red} f \cdot d} + {\color{blue} f \wedge d} $
 $ fd = {\color{red} Scalar} + {\color{blue} Bivector} = Multivector $
Isomorphism to quaternions  

Multiplying any 2 Pauli matrices results in a quaternion. Hence the geometric interpretation of the quaternion units $ \boldsymbol{\hat{\imath}}, \boldsymbol{\hat{\jmath}}, \boldsymbol{\hat{k}} \quad $ as bivectors in 3-dimensional (not 4-dimensional) space. Quaternions form a ^{*}division algebra—every nonzero element has an inverse—whereas Pauli matrices do not.
And multiplying a Pauli matrix by a quaternion results in a Pauli matrix. 
Further reading: ^{*}Generalizations of Pauli matrices, ^{*}Gell-Mann matrices and ^{*}Pauli equation
Pseudo-Euclidean
 See also: ^{*}Electron magnetic moment
Gamma ^{*}matrices, $ \{ \gamma^0, \gamma^1, \gamma^2, \gamma^3 \} $, also known as the Dirac matrices, are a set of 4 × 4 conventional matrices with specific ^{*}anticommutation relations that ensure they ^{*}generate a matrix representation of the Clifford algebra^{w} Cℓ_{1,3}(R). One gamma matrix squares to +1 times the ^{*}identity matrix and three gamma matrices square to −1 times the identity matrix:
 $ (\gamma^0)^2 = I $
 $ (\gamma^1)^2 = (\gamma^2)^2 = (\gamma^3)^2 = -I $
The defining property for the gamma matrices to generate a Clifford algebra^{w} is the anticommutation relation
 $ \displaystyle\{ \gamma^\mu, \gamma^\nu \} = \gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu = 2 \eta^{\mu \nu} I_4 $
where $ \{ , \} $ is the ^{*}anticommutator, $ \eta^{\mu \nu} $ is the ^{*}Minkowski metric with signature (+ − − −) and $ I_4 $ is the 4 × 4 identity matrix.
Minkowski metric 

From Wikipedia:Minkowski_space#Minkowski_metric The simplest example of a Lorentzian manifold is ^{*}flat spacetime, which can be given as R^{4} with coordinates $ (t,x,y,z) $ and the metric
 $ ds^2 = dt^2 - dx^2 - dy^2 - dz^2 $
Note that these coordinates actually cover all of R^{4}. The flat space metric (or ^{*}Minkowski metric) is often denoted by the symbol η and is the metric used in ^{*}special relativity. A standard basis for Minkowski space is a set of four mutually orthogonal vectors { e_{0}, e_{1}, e_{2}, e_{3} } such that
 $ \langle e_0, e_0 \rangle = 1, \qquad \langle e_1, e_1 \rangle = \langle e_2, e_2 \rangle = \langle e_3, e_3 \rangle = -1 $
These conditions can be written compactly in the form
 $ \langle e_\mu, e_\nu \rangle = \eta_{\mu\nu} $
Relative to a standard basis, the components of a vector v are written (v^{0}, v^{1}, v^{2}, v^{3}) where the ^{*}Einstein summation convention is used to write v = v^{μ}e_{μ}. The component v^{0} is called the timelike component of v while the other three components are called the spatial components. The spatial components of a 4-vector v may be identified with a 3-vector v = (v_{1}, v_{2}, v_{3}). In terms of components, the Minkowski inner product between two vectors v and w is given by
 $ \langle v, w \rangle = v^0 w^0 - v^1 w^1 - v^2 w^2 - v^3 w^3 $
and
 $ \langle v, w \rangle = v_\mu w^\mu = \eta_{\mu\nu} v^\mu w^\nu $
Here lowering of an index with the metric was used. The Minkowski metric^{[14]} η is the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally a constant pseudo-Riemannian metric in Cartesian coordinates. As such it is a nondegenerate symmetric bilinear form, a type (0,2) tensor. It accepts two arguments u, v. The definition
 $ u \cdot v = \eta(u, v) $
yields an inner product-like structure on M, previously and also henceforth, called the Minkowski inner product, similar to the Euclidean inner product, but it describes a different geometry. It is also called the relativistic dot product. If the two arguments are the same,
 $ u \cdot u = \eta(u, u) = \|u\|^2 $
the resulting quantity will be called the Minkowski norm squared. This bilinear form can in turn be written as
 $ u \cdot v = u^{\mathrm{T}} [\eta] v $
where [η] is a 4×4 matrix associated with η. Possibly confusingly, denote [η] with just η as is common practice. The matrix is read off from the explicit bilinear form as
 $ [\eta] = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} $
and the bilinear form
 $ u \cdot v = \eta_{\mu\nu} u^\mu v^\nu $
with which this section started by assuming its existence, is now identified. 
When interpreted as the matrices of the action of a set of orthogonal basis vectors for ^{*}contravariant vectors in Minkowski space^{w}, the column vectors on which the matrices act become a space of ^{*}spinors, on which the Clifford algebra of ^{*}spacetime acts. This in turn makes it possible to represent infinitesimal ^{*}spatial rotations and Lorentz boosts^{w}. Spinors facilitate spacetime computations in general, and in particular are fundamental to the ^{*}Dirac equation for relativistic spin-½ particles.
In Dirac representation^{w}, the four ^{*}contravariant gamma matrices are
 $ \begin{align} \gamma^0 &= \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix},\quad& \gamma^1 &= \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{pmatrix} \\ \gamma^2 &= \begin{pmatrix} 0 & 0 & 0 & -i \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ -i & 0 & 0 & 0 \end{pmatrix},\quad& \gamma^3 &= \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}. \end{align} $
$ \gamma^0 $ is the timelike matrix and the other three are spacelike matrices.
 $ (\gamma^0)^2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $
 $ (\gamma^1)^2 = (\gamma^2)^2 = (\gamma^3)^2 = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} $
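The defining anticommutation relation can be checked numerically for the Dirac-representation matrices; a NumPy sketch built from the Pauli matrices:

```python
import numpy as np

# Sketch: the Dirac-representation gamma matrices satisfy
#   {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4
# with the metric signature (+, -, -, -).
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)
Z2 = np.zeros((2, 2))

gamma = [np.block([[I2, Z2], [Z2, -I2]])]               # gamma^0
gamma += [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]  # gamma^1..gamma^3

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def anticommutator(A, B):
    return A @ B + B @ A
```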
The matrices are also sometimes written using the 2×2 ^{*}identity matrix, $ I_2 $, and the ^{*}Pauli matrices.
The gamma matrices we have written so far are appropriate for acting on ^{*}Dirac spinors written in the Dirac basis; in fact, the Dirac basis is defined by these matrices. To summarize, in the Dirac basis:
 $ \gamma^0 = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix},\quad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix},\quad \gamma^5 = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix}. $
Another common choice is the Weyl or chiral basis,^{[15]} in which $ \gamma^k $ remains the same but $ \gamma^0 $ is different, and so $ \gamma^5 $ is also different, and diagonal,
 $ \gamma^0 = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix},\quad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix},\quad \gamma^5 = \begin{pmatrix} -I_2 & 0 \\ 0 & I_2 \end{pmatrix}, $
Original Dirac matrices 

$ \begin{array}{cccc} \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix} & \begin{pmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{pmatrix} & \begin{pmatrix} 0&-i&0&0\\ i&0&0&0\\ 0&0&0&-i\\ 0&0&i&0 \end{pmatrix} & \begin{pmatrix} 1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&-1 \end{pmatrix} \\\hline \begin{pmatrix} 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0 \end{pmatrix} & {\color{red} \begin{pmatrix} 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0 \end{pmatrix} } & {\color{red} \begin{pmatrix} 0&0&0&-i\\ 0&0&i&0\\ 0&-i&0&0\\ i&0&0&0 \end{pmatrix} } & {\color{red} \begin{pmatrix} 0&0&1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&-1&0&0 \end{pmatrix} } \\\hline \begin{pmatrix} 0&0&-i&0\\ 0&0&0&-i\\ i&0&0&0\\ 0&i&0&0 \end{pmatrix} & \begin{pmatrix} 0&0&0&-i\\ 0&0&-i&0\\ 0&i&0&0\\ i&0&0&0 \end{pmatrix} & \begin{pmatrix} 0&0&0&-1\\ 0&0&1&0\\ 0&1&0&0\\ -1&0&0&0 \end{pmatrix} & \begin{pmatrix} 0&0&-i&0\\ 0&0&0&i\\ i&0&0&0\\ 0&-i&0&0 \end{pmatrix} \\\hline {\color{red} \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1 \end{pmatrix}} & \begin{pmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&-1&0 \end{pmatrix} & \begin{pmatrix} 0&-i&0&0\\ i&0&0&0\\ 0&0&0&i\\ 0&0&-i&0 \end{pmatrix} & \begin{pmatrix} 1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&1 \end{pmatrix} \end{array} $ Surprisingly, the 4×4 table above forms a multiplication table (each entry is the product of the first matrix in its row and the first matrix in its column) even though it is actually created by the following rule: the matrix in row $ i{+}1 $, column $ j{+}1 $ (for $ i, j = 0, 1, 2, 3 $) is
 $ \sigma_i \otimes \sigma_j $
where $ \sigma_i $ and $ \sigma_j $ are the original 2×2 Pauli matrices and $ \otimes $ is the ^{*}Kronecker product (not the tensor product)
The Dirac matrices are commonly referred to by names in which $ \sigma_i $ does not refer to the original 2×2 Pauli matrices.
The 16 original Dirac matrices form six anticommuting sets of five matrices each (Arfken 1985, p. 214).
Any of the 15 original Dirac matrices (excluding the identity matrix $ \sigma_0 $) anticommutes with eight other original Dirac matrices and commutes with the remaining eight, including itself and the identity matrix. Any of the 16 original Dirac matrices multiplied by itself equals $ I_4 $. 
Higherdimensional gamma matrices 

^{*}Analogous sets of gamma matrices can be defined in any dimension and for any signature of the metric. For example, the Pauli matrices are a set of "gamma" matrices in dimension 3 with metric of Euclidean signature (3,0). In 5 spacetime dimensions, the 4 gammas above together with the fifth gamma matrix to be presented below generate the Clifford algebra. It is useful to define the product of the four gamma matrices as follows:
 $ \gamma^5 := i\gamma^0\gamma^1\gamma^2\gamma^3 $
Although $ \gamma^5 $ uses the letter gamma, it is not one of the gamma matrices of Cℓ_{1,3}(R). The number 5 is a relic of old notation in which $ \gamma^0 $ was called "$ \gamma^4 $".
From Wikipedia:Higher-dimensional gamma matrices Consider a spacetime of dimension d with the flat ^{*}Minkowski metric,
 $ \eta^{ab} = \operatorname{diag}(+1, -1, \ldots, -1) $
where a,b = 0, 1, ..., d−1. Set N = 2^{⌊d/2⌋}. The standard Dirac matrices correspond to taking d = N = 4. The higher gamma matrices are a d-long sequence of complex N×N matrices $ \Gamma_i,\ i=0,\ldots,d-1 $ which satisfy the ^{*}anticommutator relation from the ^{*}Clifford algebra Cℓ_{1,d−1}(R) (generating a representation for it),
 $ \{\Gamma_a, \Gamma_b\} = \Gamma_a\Gamma_b + \Gamma_b\Gamma_a = 2\eta^{ab} I_N $
where I_{N} is the ^{*}identity matrix in N dimensions. (The spinors acted on by these matrices have N components in d dimensions.) Such a sequence exists for all values of d and can be constructed explicitly, as provided below. The gamma matrices have the following property under hermitian conjugation,
 $ \Gamma_0^\dagger = +\Gamma_0, \qquad \Gamma_i^\dagger = -\Gamma_i \quad (i = 1, \ldots, d-1). $

Further reading: Quantum Mechanics for Engineers and How (not) to teach Lorentz covariance of the Dirac equation
Multivectors
 See also: ^{*}Dirac algebra
External links:
 A brief introduction to geometric algebra
 A brief introduction to Clifford algebra
 The Construction of Spinors in Geometric Algebra
 Functions of Multivector Variables
 Clifford Algebra Representations
Clifford algebra is a type of algebra characterized by the geometric product of scalars, vectors, bivectors, trivectors...etc.
Just as a vector has length so a bivector has area and a trivector has volume.
Just as a vector has direction so a bivector has orientation. In three dimensions a trivector has only one possible orientation and is therefore a pseudoscalar. But in four dimensions a trivector becomes a pseudovector and the quadvector becomes the pseudoscalar.
Rules
All the properties of Clifford algebra derive from a few simple rules.
Let $ {\color{blue}e_{x}}, $ $ {\color{blue}e_{y}}, $ and $ {\color{blue}e_{z}} $ be perpendicular unit vectors.
Multiplying two perpendicular vectors results in a bivector:
 $ e_x e_y = {\color{green}e_{xy}} $
Multiplying three perpendicular vectors results in a trivector:
 $ e_x e_y e_z = {\color{orange}e_{xyz}} $
Multiplying parallel vectors results in a scalar:
 $ e_x e_x = {\color{red}e_{xx}} = {\color{red}1} $
Clifford algebra is associative, so the fact that multiplying parallel vectors results in a scalar means that:
 $ \begin{split} {\color{green}(e_{x} e_{y})} {\color{blue} (e_{y}) } &= {\color{blue} e_{x}} {\color{red}(e_{y} e_{y})} \\ &= {\color{blue} e_{x}} {\color{red}(1)} \\ &= {\color{blue} e_{x}} \end{split} $
 and:
 $ \begin{align} {\color{green} (e_{x} e_{y})} {\color{blue} (e_{y} + e_{z}) } &= {\color{blue} e_{xyy}} + {\color{orange}e_{xyz}} \\ &= {\color{blue} e_{x}} + {\color{orange}e_{xyz}} \end{align} $
 and:
 $ \begin{align} {\color{green} (e_{x} e_{y}) } {\color{blue} (e_{z}) } &= {\color{orange} e_{xyz}} \end{align} $
Rotation from x to y is the negative of rotation from y to x:
 $ e_{xy} = -e_{yx} $
 Therefore:
 $ \begin{split} {\color{green}(e_{x} e_{y})} {\color{blue} (e_{x}) } &= {\color{blue} e_{x}} {\color{green}(e_{y} e_{x})} \\ &= -{\color{blue} e_{x}} {\color{green}(e_{x} e_{y})} \\ &= -{\color{red} (e_{x} e_{x})} {\color{blue} e_{y}} \\ &= -{\color{red} (1)} {\color{blue} e_{y}} \\ &= -{\color{blue} e_{y}} \end{split} $
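These rules can be sketched in a few lines of code. The function below is a hypothetical helper (not from the text) that multiplies basis blades of Cl(3,0), written as strings of distinct basis-vector labels:

```python
# Sketch: multiply basis blades of Cl(3,0). A blade is a string of
# distinct labels, e.g. 'x', 'xy', 'xyz'. Multiplication concatenates
# the labels, sorts them with one sign flip per transposition
# (e_x e_y = -e_y e_x), then cancels repeated labels (e_x e_x = 1).
# Returns (sign, blade); an empty blade string means the result is a scalar.
def blade_product(a, b):
    letters = list(a + b)
    sign = 1
    changed = True
    while changed:                      # bubble sort with sign tracking
        changed = False
        for i in range(len(letters) - 1):
            if letters[i] > letters[i + 1]:
                letters[i], letters[i + 1] = letters[i + 1], letters[i]
                sign = -sign
                changed = True
    out = []
    for c in letters:                   # cancel e_x e_x = 1
        if out and out[-1] == c:
            out.pop()
        else:
            out.append(c)
    return sign, ''.join(out)
```

For example, `blade_product('xy', 'x')` reproduces the derivation above: $(e_x e_y)(e_x) = -e_y$.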
Multiplication tables
 In one dimension:
 $ \begin{array}{rr} {\color{red} 1 } & {\color{blue} e_{x}} \\\hline {\color{blue} e_{x}} & {\color{red} 1 } \end{array} $
 In two dimensions:
 $ \begin{array}{rrrr} {\color{red} 1 } & {\color{blue} e_{x}} & {\color{blue} e_{y}} & {\color{green} e_{xy}} \\\hline {\color{blue} e_{x}} & {\color{red} 1 } & {\color{green} e_{xy}} & {\color{blue} e_{y}} \\ {\color{blue} e_{y}} & -{\color{green} e_{xy}} & {\color{red} 1 } & -{\color{blue} e_{x}} \\\hline {\color{green} e_{xy}} & -{\color{blue} e_{y}} & {\color{blue} e_{x}} & {\color{red} -1} \end{array} $
 In three dimensions:
 $ \begin{array}{rrrrrrrr} {\color{red} 1 } & {\color{blue} e_{x}} & {\color{blue} e_{y}} & {\color{blue} e_{z}} & {\color{green} e_{xy}} & {\color{green} e_{xz}} & {\color{green} e_{yz}} & {\color{orange} e_{xyz}} \\\hline {\color{blue} e_{x}} & {\color{red} 1 } & {\color{green} e_{xy}} & {\color{green} e_{xz}} & {\color{blue} e_{y}} & {\color{blue} e_{z}} & {\color{orange} e_{xyz}} & {\color{green} e_{yz}} \\ {\color{blue} e_{y}} & -{\color{green} e_{xy}} & {\color{red} 1 } & {\color{green} e_{yz}} & -{\color{blue} e_{x}} & -{\color{orange} e_{xyz}} & {\color{blue} e_{z}} & -{\color{green} e_{xz}} \\ {\color{blue} e_{z}} & -{\color{green} e_{xz}} & -{\color{green} e_{yz}} & {\color{red} 1 } & {\color{orange} e_{xyz}} & -{\color{blue} e_{x}} & -{\color{blue} e_{y}} & {\color{green} e_{xy}} \\\hline {\color{green} e_{xy}} & -{\color{blue} e_{y}} & {\color{blue} e_{x}} & {\color{orange} e_{xyz}} & {\color{red} -1} & -{\color{green} e_{yz}} & {\color{green} e_{xz}} & -{\color{blue} e_{z}} \\ {\color{green} e_{xz}} & -{\color{blue} e_{z}} & -{\color{orange} e_{xyz}} & {\color{blue} e_{x}} & {\color{green} e_{yz}} & {\color{red} -1 } & -{\color{green} e_{xy}} & {\color{blue} e_{y}} \\ {\color{green} e_{yz}} & {\color{orange} e_{xyz}} & -{\color{blue} e_{z}} & {\color{blue} e_{y}} & -{\color{green} e_{xz}} & {\color{green} e_{xy}} & {\color{red} -1 } & -{\color{blue} e_{x}} \\\hline {\color{orange} e_{xyz}} & {\color{green} e_{yz}} & -{\color{green} e_{xz}} & {\color{green} e_{xy}} & -{\color{blue} e_{z}} & {\color{blue} e_{y}} & -{\color{blue} e_{x}} & {\color{red} -1 } \end{array} $
 In four dimensions:

Multiplication of arbitrary vectors
The dot product of two vectors is:
 $ \begin{split} \mathbf{u} \cdot \mathbf{v} &= {\color{blue}\text{vector}} \cdot {\color{blue}\text{vector}} \\ &= (u_{x} + u_{y})(v_{x} + v_{y}) \\ &= {\color{red}u_{x} v_{x} + u_{y} v_{y}} \end{split} $
But this is actually quite mysterious. When we multiply $ (a_1 + a_2) $ and $ (b_1 + b_2) $ we don't get $ (a_1 b_1 + a_2 b_2) $, so why is it that when we multiply vectors we only multiply parallel components? Clifford algebra has a surprisingly simple answer: we don't! Instead of the dot product or the wedge product, Clifford algebra uses the geometric product.
 $ \begin{split} \mathbf{u} \mathbf{v} &= (u_{x} {\color{blue}e_{x}} + u_{y} {\color{blue} e_{y}} ) (v_{x} {\color{blue}e_{x}} + v_{y} {\color{blue} e_{y}} ) \\ &= u_{x} v_{x} {\color{red}e_{x} e_{x}} + u_{x} v_{y} {\color{green}e_{x} e_{y}} + u_{y} v_{x} {\color{green}e_{y} e_{x}} + u_{y} v_{y} {\color{red}e_{y} e_{y}} \\ &= u_{x} v_{x} {\color{red}(1)} + u_{y} v_{y} {\color{red}(1)} + u_{x} v_{y} {\color{green}e_{x} e_{y}} - u_{y} v_{x} {\color{green}e_{x} e_{y}} \\ &= (u_{x} v_{x} + u_{y} v_{y}){\color{red}(1)} + (u_{x} v_{y} - u_{y} v_{x}) {\color{green}e_{xy}} \\ &= {\color{red}\text{scalar}} + {\color{green}\text{bivector}} \end{split} $
As stated above $ \mathbf{u} \mathbf{v} = {\color{red}\text{scalar}} + {\color{green}\text{bivector}}. $ A scalar plus a bivector (or any number of blades of different grade) is called a multivector. The idea of adding a scalar and a bivector might seem wrong but in the real world it just means that what appears to be a single equation is in fact a set of ^{*}simultaneous equations.
For example:
 $ \mathbf{u} \mathbf{v} = 5 $
 would just mean that:
 $ (u_{x} v_{x} + u_{y} v_{y}){\color{red}(1)} = 5 \\ \quad \quad \quad \text{and} \\ (u_{x} v_{y} - u_{y} v_{x}) {\color{green}e_{xy}} = 0 $
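The split into a symmetric scalar part and an antisymmetric bivector part is easy to check numerically. Here is a minimal sketch (the function name and tuple conventions are my own):

```python
# Geometric product of two 2D vectors, returned as (scalar, bivector):
# u v = (u . v) + (u ^ v) e_xy

def geometric_product_2d(u, v):
    ux, uy = u
    vx, vy = v
    scalar = ux * vx + uy * vy      # symmetric part: the dot product
    bivector = ux * vy - uy * vx    # antisymmetric part: coefficient of e_xy
    return scalar, bivector

# Parallel vectors leave a pure scalar; perpendicular vectors a pure bivector.
print(geometric_product_2d((2, 0), (3, 0)))  # (6, 0)
print(geometric_product_2d((1, 0), (0, 1)))  # (0, 1)
```

Swapping the arguments flips the sign of the bivector part but leaves the scalar part unchanged.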
Basis
Every multivector of the Clifford algebra can be expressed as a linear combination of the canonical basis elements. The basis elements of the Clifford algebra Cℓ_{3} are $ \{{\color{red}1}, \, {\color{blue}e_x}, \, {\color{blue}e_y}, \, {\color{blue}e_z}, \, {\color{green}e_{xy}}, \, {\color{green}e_{xz}}, \, {\color{green}e_{yz}}, \, {\color{orange}e_{xyz}} \} $ and the general element of Cℓ_{3} is given by
 $ A = a_0{\color{red}(1)} + a_1 {\color{blue}e_x} + a_2 {\color{blue}e_y} + a_3 {\color{blue}e_z} + a_4 {\color{green}e_{xy}} + a_5 {\color{green}e_{xz}} + a_6 {\color{green}e_{yz}} + a_7 {\color{orange}e_{xyz}}. $
If $ a_0, a_1, a_2, a_3, a_4, a_5, a_6, a_7 $ are all real then the Clifford algebra is Cℓ_{3}(R). If the coefficients are allowed to be complex then the Clifford algebra is Cℓ_{3}(C).
A multivector can be separated into components of different grades:
 $ \langle \mathbf{A} \rangle_0 = a_0{\color{red}(1)}\\ \langle \mathbf{A} \rangle_1 = a_1 {\color{blue}e_x} + a_2 {\color{blue}e_y} + a_3 {\color{blue}e_z}\\ \langle \mathbf{A} \rangle_2 = a_4 {\color{green}e_{xy}} + a_5 {\color{green}e_{xz}} + a_6 {\color{green}e_{yz}}\\ \langle \mathbf{A} \rangle_3 = a_7 {\color{orange}e_{xyz}}\\ $
The elements of even grade form a subalgebra because the sum or product of even grade elements always results in an element of even grade. The elements of odd grade do not form a subalgebra.
Relation to other algebras
$ Cℓ_0 (\mathbf{R}) $: Real numbers (scalars). A scalar can (and should) be thought of as a product of zero vectors. See Empty product.
$ Cℓ_0 (\mathbf{C}) $: Complex numbers
$ Cℓ_1 (\mathbf{R}) $: Split-complex numbers
 $ \begin{array}{rr} {\color{red} 1 } & {\color{blue} e_{x}} \\\hline {\color{blue} e_{x}} & {\color{red} 1 } \end{array} $
$ Cℓ_1 (\mathbf{C}) $: Bicomplex numbers
$ Cℓ_2^0 (\mathbf{R}) $: Complex numbers (The superscript 0 indicates the even subalgebra)
 $ \begin{array}{rr} {\color{red} 1} & {\color{green} e_{xy}} \\\hline {\color{green} e_{xy}} & {\color{red} {-1}} \end{array} $
$ Cℓ_3^0 (\mathbf{R}) $: Quaternions
 $ \begin{array}{rrrr} {\color{red} 1} & {\color{green} e_{xy}} & {\color{green} e_{xz}} & {\color{green} e_{yz}} \\\hline {\color{green} e_{xy}} & {\color{red} -1 } & {\color{green} -e_{yz}} & {\color{green} e_{xz}} \\ {\color{green} e_{xz}} & {\color{green} e_{yz}} & {\color{red} -1 } & {\color{green} -e_{xy}} \\ {\color{green} e_{yz}} & {\color{green} -e_{xz}} & {\color{green} e_{xy}} & {\color{red} -1 } \end{array} $
$ Cℓ_3^0 (\mathbf{C}) $: Biquaternions
Multivector multiplication using tensors
To find the product
 $ AB = (a_0 {\color{red} 1 } + a_1 {\color{blue} e_{x}} + a_2 {\color{blue} e_{y}} + a_3 {\color{green} e_{xy}}) (b_0 {\color{red} 1 } + b_1 {\color{blue} e_{x}} + b_2 {\color{blue} e_{y}} + b_3 {\color{green} e_{xy}}) $
we have to multiply every component of the first multivector with every component of the second multivector.
 $ \begin{split} AB = & \phantom{+} (a_0 b_0 {\color{red}1}{\color{red}1} + a_0 b_1 {\color{red}1}{\color{blue} e_{x}} + a_0 b_2 {\color{red}1}{\color{blue} e_{y}} + a_0 b_3 {\color{red}1}{\color{green} e_{xy}}) \\ &+ (a_1 b_0 {\color{blue} e_{x}}{\color{red}1} + a_1 b_1 {\color{blue} e_{x}}{\color{blue} e_{x}} + a_1 b_2 {\color{blue} e_{x}}{\color{blue} e_{y}} + a_1 b_3 {\color{blue} e_{x}}{\color{green} e_{xy}}) \\ &+ (a_2 b_0 {\color{blue} e_{y}}{\color{red}1} + a_2 b_1 {\color{blue} e_{y}}{\color{blue} e_{x}} + a_2 b_2 {\color{blue} e_{y}}{\color{blue} e_{y}} + a_2 b_3 {\color{blue} e_{y}}{\color{green} e_{xy}}) \\ &+ (a_3 b_0 {\color{green} e_{xy}}{\color{red}1} + a_3 b_1 {\color{green} e_{xy}}{\color{blue} e_{x}} + a_3 b_2 {\color{green} e_{xy}}{\color{blue} e_{y}} + a_3 b_3 {\color{green} e_{xy}}{\color{green} e_{xy}}) \end{split} $
Then we reduce each of the 16 resulting terms to its standard form.
 $ \begin{split} AB = & \phantom{+} (a_0 b_0 {\color{red}1} + a_0 b_1 {\color{blue} e_{x}} + a_0 b_2 {\color{blue} e_{y}} + a_0 b_3 {\color{green} e_{xy}}) \\ &+ (a_1 b_0 {\color{blue} e_{x}} + a_1 b_1 {\color{red}1} + a_1 b_2 {\color{green} e_{xy}} + a_1 b_3 {\color{blue} e_{y}}) \\ &+ (a_2 b_0 {\color{blue} e_{y}} - a_2 b_1 {\color{green} e_{xy}} + a_2 b_2 {\color{red}1} - a_2 b_3 {\color{blue} e_{x}}) \\ &+ (a_3 b_0 {\color{green} e_{xy}} - a_3 b_1 {\color{blue} e_{y}} + a_3 b_2 {\color{blue} e_{x}} - a_3 b_3 {\color{red}1}) \end{split} $
Finally we collect like products into the four components of the final multivector.
 $ \begin{split} AB = & \phantom{+} ( a_0 b_0 + a_1 b_1 + a_2 b_2 - a_3 b_3 ) {\color{red}1} \\ &+ ( a_1 b_0 + a_0 b_1 + a_3 b_2 - a_2 b_3 ) {\color{blue} e_{x} } \\ &+ ( a_2 b_0 - a_3 b_1 + a_0 b_2 + a_1 b_3 ) {\color{blue} e_{y} } \\ &+ ( a_3 b_0 - a_2 b_1 + a_1 b_2 + a_0 b_3 ) {\color{green} e_{xy} } \end{split} $
This is all very tedious and error-prone. It would be nice if there were some way to cut straight to the end. Tensor notation allows us to do just that.
To find the tensor that we need we first need to know which terms end up as scalars, which terms end up as vectors...etc. There is an easy way to do this and it involves the multiplication table.
First let's start with an easy one.
Complex numbers
The multiplication table for $ Cℓ_2^0 (\mathbf{R}) $
 $ \begin{array}{rr} {\color{red} 1} & {\color{green} e_{xy}} \\\hline {\color{green} e_{xy}} & {\color{red} -1 } \end{array} $
We can see then that if we multiply each row by the first row we get:
 $ \begin{array}{rr} {\color{red} 1} & {\color{red} -1} \\\hline {\color{green} e_{xy}} & {\color{green} -e_{xy}} \end{array} $
It worked! All the terms in the first row are scalars and all the terms in the second row are bivectors. This is exactly what we are looking for.

Pay special attention to the signs in the table above.
Therefore to find the product
 $ (a_0 {\color{red} 1 } + a_1 {\color{green} e_{xy}} ) (b_0 {\color{red} 1 } + b_1 {\color{green} e_{xy}} ) $
We would multiply:
 $ \left( \begin{array}{rr} {\color{red} b_0 } & {\color{green} -b_1 } \\ {\color{green} b_1 } & {\color{red} b_0 } \end{array} \right) \left( \begin{array}{r} {\color{red} a_0 } \\ {\color{green} a_1 } \end{array} \right) = \left( \begin{array}{rcr} {\color{red} b_0 } {\color{red} a_0 } & - & {\color{red} b_1 } {\color{red} a_1 } \\ {\color{green} b_1 } {\color{green} a_0 } & + & {\color{green} b_0 } {\color{green} a_1 } \end{array} \right) $
Each row of the final matrix has exactly the right terms with exactly the right signs.
The vector above represents a complex number. You should think of the first column of the matrix above as representing another complex number. All the other terms in the matrix are just there to make our lives a little bit easier.
It works. It works so well that complex numbers can be represented as matrices:
 $ \begin{bmatrix} a & -b \\ b & a \end{bmatrix} $
Which for some odd reason corresponds perfectly to a multiplication table for complex numbers:
$ \begin{array}{rr} {\color{red} 1} & {\color{green} i} \\\hline {\color{green} i} & {\color{red} -1 } \end{array} $
$ \quad \quad \begin{array}{rr} {\color{red} 1} & {\color{green} e_{xy}} \\\hline {\color{green} e_{xy}} & {\color{red} -1 } \end{array} $
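The correspondence can be spot-checked with a few lines of code; the helper names below are my own:

```python
def as_matrix(z):
    """Represent the complex number z = a + bi as [[a, -b], [b, a]]."""
    return [[z.real, -z.imag], [z.imag, z.real]]

def mat_mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Matrix multiplication mirrors complex multiplication exactly:
z, w = 1 + 2j, 3 - 1j
print(z * w)                                # (5+5j)
print(mat_mul(as_matrix(z), as_matrix(w)))  # [[5.0, -5.0], [5.0, 5.0]]
```

The product of the two matrices is exactly the matrix of the product of the two complex numbers.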
Quaternions
The multiplication table for $ Cℓ_3^0 (\mathbf{R}) $ is:
 $ \begin{array}{rrrr} {\color{red} 1} & {\color{green} e_{xy}} & {\color{green} e_{xz}} & {\color{green} e_{yz}} \\\hline {\color{green} e_{xy}} & {\color{red} -1 } & {\color{green} -e_{yz}} & {\color{green} e_{xz}} \\ {\color{green} e_{xz}} & {\color{green} e_{yz}} & {\color{red} -1 } & {\color{green} -e_{xy}} \\ {\color{green} e_{yz}} & {\color{green} -e_{xz}} & {\color{green} e_{xy}} & {\color{red} -1 } \end{array} \quad \quad \quad \begin{array}{l} i = e_{yz} \\ j = e_{xz} \\ k = e_{xy} \end{array} $
The entire 2nd row of the multiplication table is just $ {\color{green} e_{xy}} $ multiplied by the entire first row.
The entire 3rd row of the multiplication table is just $ {\color{green} e_{xz}} $ multiplied by the entire first row.
The entire 4th row of the multiplication table is just $ {\color{green} e_{yz}} $ multiplied by the entire first row.
We can see then that if we multiply each row by the first row again then we get:
 $ \begin{array}{rrrr} {\color{red} 1} & {\color{red} -1} & {\color{red} -1} & {\color{red} -1} \\\hline {\color{green} e_{xy}} & {\color{green} -e_{xy}} & {\color{green} -e_{xy}} & {\color{green} -e_{xy}} \\ {\color{green} e_{xz}} & {\color{green} -e_{xz}} & {\color{green} -e_{xz}} & {\color{green} -e_{xz}} \\ {\color{green} e_{yz}} & {\color{green} -e_{yz}} & {\color{green} -e_{yz}} & {\color{green} -e_{yz}} \end{array} $
This works because we have in effect multiplied each term by a second term twice. In other words we have multiplied every term by the square of another term, and the square of every term is either 1 or −1.
Therefore to find the product
 $ (a_0 {\color{red} 1 } + a_1 {\color{green} e_{xy}} + a_2 {\color{green} e_{xz}} + a_3 {\color{green} e_{yz}}) (b_0 {\color{red} 1 } + b_1 {\color{green} e_{xy}} + b_2 {\color{green} e_{xz}} + b_3 {\color{green} e_{yz}}) $
We would multiply:
 $ \left( \begin{array}{rrrr} {\color{red} b_0 } & {\color{green} -b_1 } & {\color{green} -b_2 } & {\color{green} -b_3 } \\ {\color{green} b_1 } & {\color{red} b_0 } & {\color{green} -b_3 } & {\color{green} b_2 } \\ {\color{green} b_2 } & {\color{green} b_3 } & {\color{red} b_0 } & {\color{green} -b_1 } \\ {\color{green} b_3 } & {\color{green} -b_2 } & {\color{green} b_1 } & {\color{red} b_0 } \end{array} \right) \left( \begin{array}{r} {\color{red} a_0 } \\ {\color{green} a_1 } \\ {\color{green} a_2 } \\ {\color{green} a_3 } \end{array} \right) = \left( \begin{array}{rcrcrcr} {\color{red} b_0 } {\color{red} a_0 } & - & {\color{red} b_1 } {\color{red} a_1 } & - & {\color{red} b_2 } {\color{red} a_2 } & - & {\color{red} b_3 } {\color{red} a_3 } \\ {\color{green} b_1 } {\color{green} a_0 } & + & {\color{green} b_0 } {\color{green} a_1 } & - & {\color{green} b_3 } {\color{green} a_2 } & + & {\color{green} b_2 } {\color{green} a_3 } \\ {\color{green} b_2 } {\color{green} a_0 } & + & {\color{green} b_3 } {\color{green} a_1 } & + & {\color{green} b_0 } {\color{green} a_2 } & - & {\color{green} b_1 } {\color{green} a_3 } \\ {\color{green} b_3 } {\color{green} a_0 } & - & {\color{green} b_2 } {\color{green} a_1 } & + & {\color{green} b_1 } {\color{green} a_2 } & + & {\color{green} b_0 } {\color{green} a_3 } \end{array} \right) $
Just as complex numbers can be represented as matrices, so a quaternion can be represented as:
 $ \left( \begin{array}{rrrr} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{array} \right)= \left( \begin{array}{rr} \left( \begin{array}{rr} a & -b \\ b & a \end{array} \right) & -\left( \begin{array}{rr} c & d \\ d & -c \end{array} \right) \\ \left( \begin{array}{rr} c & d \\ d & -c \end{array} \right) & \left( \begin{array}{rr} a & -b \\ b & a \end{array} \right) \end{array} \right) $
Which corresponds to a multiplication table for quaternions:
$ \begin{array}{rrrr} {\color{red} 1} & {\color{green} k} & {\color{green} j} & {\color{green} i} \\\hline {\color{green} k} & {\color{red} -1 } & {\color{green} -i} & {\color{green} j} \\ {\color{green} j} & {\color{green} i} & {\color{red} -1 } & {\color{green} -k} \\ {\color{green} i} & {\color{green} -j} & {\color{green} k} & {\color{red} -1 } \end{array} $
$ \quad \quad \begin{array}{rrrr} {\color{red} 1} & {\color{green} e_{xy}} & {\color{green} e_{xz}} & {\color{green} e_{yz}} \\\hline {\color{green} e_{xy}} & {\color{red} -1 } & {\color{green} -e_{yz}} & {\color{green} e_{xz}} \\ {\color{green} e_{xz}} & {\color{green} e_{yz}} & {\color{red} -1 } & {\color{green} -e_{xy}} \\ {\color{green} e_{yz}} & {\color{green} -e_{xz}} & {\color{green} e_{xy}} & {\color{red} -1 } \end{array} $
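A short script can confirm that the 4×4 pattern above multiplies like quaternions. This is a sketch assuming the coefficient ordering (a, b, c, d) = a + bi + cj + dk; the helper names are my own:

```python
def quat_matrix(q):
    """4x4 left-multiplication matrix of the quaternion q = (a, b, c, d)."""
    a, b, c, d = q
    return [[a, -b, -c, -d],
            [b,  a, -d,  c],
            [c,  d,  a, -b],
            [d, -c,  b,  a]]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# The Hamilton relations hold: i j = k, j i = -k.
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(mat_vec(quat_matrix(i), j))  # [0, 0, 0, 1]   i.e.  k
print(mat_vec(quat_matrix(j), i))  # [0, 0, 0, -1]  i.e. -k
```

Multiplying the matrix of one quaternion by the coefficient vector of another reproduces the quaternion product of the two.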
Cℓ_{2}
The multiplication table for $ Cℓ_{2} (\mathbf{R}) $ is:
 $ \begin{array}{rrrr} {\color{red} 1 } & {\color{blue} e_{x}} & {\color{blue} e_{y}} & {\color{green} e_{xy}} \\\hline {\color{blue} e_{x}} & {\color{red} 1 } & {\color{green} e_{xy}} & {\color{blue} e_{y}} \\ {\color{blue} e_{y}} & {\color{green} -e_{xy}} & {\color{red} 1 } & {\color{blue} -e_{x}} \\\hline {\color{green} e_{xy}} & {\color{blue} -e_{y}} & {\color{blue} e_{x}} & {\color{red} -1} \end{array} $
We can see then that if we multiply each row by the first row we get:
 $ \begin{array}{rrrr} {\color{red} 1} & {\color{red} 1} & {\color{red} 1} & {\color{red} -1} \\\hline {\color{blue} e_{x}} & {\color{blue} e_{x}} & {\color{blue} e_{x}} & {\color{blue} -e_{x}} \\ {\color{blue} e_{y}} & {\color{blue} e_{y}} & {\color{blue} e_{y}} & {\color{blue} -e_{y}} \\\hline {\color{green} e_{xy}} & {\color{green} e_{xy}} & {\color{green} e_{xy}} & {\color{green} -e_{xy}} \end{array} $
Therefore to find the product
 $ (a_0 {\color{red} 1 } + a_1 {\color{blue} e_{x}} + a_2 {\color{blue} e_{y}} + a_3 {\color{green} e_{xy}}) (b_0 {\color{red} 1 } + b_1 {\color{blue} e_{x}} + b_2 {\color{blue} e_{y}} + b_3 {\color{green} e_{xy}}) $
We would multiply:
 $ \left( \begin{array}{rrrr} {\color{red} b_0 } & {\color{blue} b_1 } & {\color{blue} b_2 } & {\color{green} -b_3 } \\ {\color{blue} b_1 } & {\color{red} b_0 } & {\color{green} -b_3 } & {\color{blue} b_2 } \\ {\color{blue} b_2 } & {\color{green} b_3 } & {\color{red} b_0 } & {\color{blue} -b_1 } \\ {\color{green} b_3 } & {\color{blue} b_2 } & {\color{blue} -b_1 } & {\color{red} b_0 } \end{array} \right) \left( \begin{array}{r} {\color{red} a_0 } \\ {\color{blue} a_1 } \\ {\color{blue} a_2 } \\ {\color{green} a_3 } \end{array} \right) = \left( \begin{array}{rcrcrcr} {\color{red} b_0 } {\color{red} a_0 } & + & {\color{red} b_1 } {\color{red} a_1 } & + & {\color{red} b_2 } {\color{red} a_2 } & - & {\color{red} b_3 } {\color{red} a_3 } \\ {\color{blue} b_1 } {\color{blue} a_0 } & + & {\color{blue} b_0 } {\color{blue} a_1 } & - & {\color{blue} b_3 } {\color{blue} a_2 } & + & {\color{blue} b_2 } {\color{blue} a_3 } \\ {\color{blue} b_2 } {\color{blue} a_0 } & + & {\color{blue} b_3 } {\color{blue} a_1 } & + & {\color{blue} b_0 } {\color{blue} a_2 } & - & {\color{blue} b_1 } {\color{blue} a_3 } \\ {\color{green} b_3 } {\color{green} a_0 } & + & {\color{green} b_2 } {\color{green} a_1 } & - & {\color{green} b_1 } {\color{green} a_2 } & + & {\color{green} b_0 } {\color{green} a_3 } \end{array} \right) $
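As a sanity check, the whole Cℓ_{2}(R) product can be packed into one short function built from the coefficients collected earlier (the function name and the (1, e_x, e_y, e_xy) ordering are my own):

```python
def cl2_product(a, b):
    """Geometric product of Cl_2(R) multivectors on basis (1, e_x, e_y, e_xy)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar
            a1*b0 + a0*b1 + a3*b2 - a2*b3,   # e_x
            a2*b0 - a3*b1 + a0*b2 + a1*b3,   # e_y
            a3*b0 - a2*b1 + a1*b2 + a0*b3)   # e_xy

# Vectors anticommute: e_x e_y = e_xy but e_y e_x = -e_xy,
# and the bivector squares to -1.
ex, ey, exy = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(cl2_product(ex, ey))    # (0, 0, 0, 1)
print(cl2_product(ey, ex))    # (0, 0, 0, -1)
print(cl2_product(exy, exy))  # (-1, 0, 0, 0)
```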
Squares of pseudoscalars are either 1 or −1
In 0 dimensions:
 $ (1)^2 = 1 $
In 1 dimension:
 $ (e_{1})^2 = e_{11} = 1 $
In 2 dimensions:
 $ (e_{12})^2 = e_{1212} = -e_{11 \color{red}{22}} = -1 $
In 3 dimensions:
 $ (e_{123})^2 = e_{123123} = e_{1212 \color{red}{33}} = -1 $
In 4 dimensions:
 $ (e_{1234})^2 = e_{12341234} = -e_{123123 \color{red}{44}} = 1 $
In 5 dimensions:
 $ (e_{12345})^2 = e_{1234512345} = e_{12341234 \color{red}{55}} = 1 $
In 6 dimensions:
 $ (e_{123456})^2 = e_{123456123456} = -e_{1234512345 \color{red}{66}} = -1 $
In 7 dimensions:
 $ (e_{1234567})^2 = e_{12345671234567} = e_{123456123456 \color{red}{77}} = -1 $
In 8 dimensions:
 $ (e_{12345678})^2 = e_{1234567812345678} = -e_{12345671234567 \color{red}{88}} = 1 $
In 9 dimensions:
 $ (e_{123456789})^2 = e_{123456789123456789} = e_{1234567812345678 \color{red}{99}} = 1 $
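The pattern can be reproduced mechanically: write the squared pseudoscalar as the index string 12...n12...n, sort it with adjacent swaps (each swap of two distinct basis vectors flips the sign), then cancel the resulting equal pairs. A sketch assuming Euclidean signature, i.e. every $ e_i $ squares to 1; the function name is my own:

```python
def pseudoscalar_square(n):
    """Sign of (e_{12...n})^2, computed by swap counting."""
    indices = list(range(1, n + 1)) * 2
    sign = 1
    # bubble sort, tracking the parity of the number of transpositions;
    # equal neighbours are never swapped, and contract to 1 at the end
    for i in range(len(indices)):
        for j in range(len(indices) - 1):
            if indices[j] > indices[j + 1]:
                indices[j], indices[j + 1] = indices[j + 1], indices[j]
                sign = -sign
    return sign

print([pseudoscalar_square(n) for n in range(10)])
# [1, 1, -1, -1, 1, 1, -1, -1, 1, 1]
```

The sign repeats with period four, matching the worked examples above: it is $ (-1)^{n(n-1)/2} $.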
Bivectors in higher dimensions
A simple bivector can be used to represent a single rotation. But in four dimensions a rigid object can rotate in two different ways simultaneously. Such a rotation can only be represented as the sum of two simple bivectors. In six dimensions a rigid object can rotate in three different ways simultaneously. Such a rotation can only be represented as the sum of three simple bivectors.
From Wikipedia:Bivector
The wedge product of two vectors is a bivector, but not all bivectors are wedge products of two vectors. For example, in four dimensions the bivector
 $ \mathbf{B} = \mathbf{e}_1 \wedge \mathbf{e}_2 + \mathbf{e}_3 \wedge \mathbf{e}_4 = \mathbf{e}_1 \mathbf{e}_2 + \mathbf{e}_3\mathbf{e}_4 = \mathbf{e}_{12} + \mathbf{e}_{34} $
cannot be written as the wedge product of two vectors. A bivector that can be written as the wedge product of two vectors is simple. In two and three dimensions all bivectors are simple, but not in four or more dimensions.
A bivector has a real square if and only if it is simple.
 $ \begin{align} (\mathbf{e}_{12})^2 &= e_{1212} \\ &= -1 \end{align} $
 But:
 $ \begin{align} (\mathbf{e}_{12} + \mathbf{e}_{34})^2 &= e_{1212} + e_{1234} + e_{3412} + e_{3434} \\ &= -e_{1122} + e_{1234} + e_{1234} - e_{3344} \\ &= -1 + 2 e_{1234} - 1 \\ &= 2 e_{1234} - 2 \end{align} $
Other quadratic forms
The square of a vector is:
 $ \begin{split} \mathbf{v} \mathbf{v} &= (v_{x} {\color{blue}e_{x}} + v_{y} {\color{blue}e_{y}}) (v_{x} {\color{blue}e_{x}} + v_{y} {\color{blue}e_{y}}) \\ &= v_{x} v_{x} {\color{red}e_{x} e_{x}} + v_{x} v_{y} {\color{green}e_{x} e_{y}} + v_{y} v_{x} {\color{green}e_{y} e_{x}} + v_{y} v_{y} {\color{red}e_{y} e_{y}} \\ &= v_{x} v_{x} {\color{red}(1)} + v_{y} v_{y} {\color{red}(1)} + v_{x} v_{y} {\color{green}e_{x} e_{y}} - v_{y} v_{x} {\color{green}e_{x} e_{y}} \\ &= (v_{x} v_{x} + v_{y} v_{y}){\color{red}(1)} + (v_{x} v_{y} - v_{y} v_{x}) {\color{green}e_{xy}} \\ &= (v_{x}^2 + v_{y}^2){\color{red}(1)} + (0) {\color{green}e_{xy}} \\ &= (v_{x}^2 + v_{y}^2){\color{red}(1)} \\ &= {\color{red}\text{scalar}} \end{split} $
 ($ v_{x}^2 + v_{y}^2 $) is called the quadratic form.
From Wikipedia:Clifford algebra:
Every nondegenerate quadratic form on a finitedimensional real vector space is equivalent to the standard diagonal form:
 $ Q(v) = v^2 = v_1^2 + \cdots + v_p^2 - v_{p+1}^2 - \cdots - v_{p+q}^2 , $
where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the ^{*}signature of the quadratic form. The real vector space with this quadratic form is often denoted R^{p,q}. The Clifford algebra on R^{p,q} is denoted Cℓ_{p,q}(R). The symbol Cℓ_{n}(R) means either Cℓ_{n,0}(R) or Cℓ_{0,n}(R) depending on whether the author prefers positivedefinite or negativedefinite spaces.
A standard basis^{w} {e_{i}} for R^{p,q} consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. The algebra Cℓ_{p,q}(R) will therefore have p vectors that square to +1 and q vectors that square to −1.
From Wikipedia:Spacetime algebra:
^{*}Spacetime algebra (STA) is a name for the Clifford algebra^{w} Cl_{3,1}(R), or equivalently the geometric algebra^{w} G(M^{4}), which can be particularly closely associated with the geometry of special relativity^{w} and relativistic spacetime^{w}. See also ^{*}Algebra of physical space.
The spacetime algebra may be built up from an orthogonal basis of one timelike vector $ \gamma_0 $ and three spacelike vectors, $ \{\gamma_1, \gamma_2, \gamma_3\} $, with the multiplication rule
 $ \gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu = 2 \eta_{\mu \nu} $
where $ \eta_{\mu \nu} $ is the Minkowski metric^{w} with signature (− + + +).
Thus:
 $ \gamma_0^2 = {-1} $
 $ \gamma_1^2 = \gamma_2^2 = \gamma_3^2 = {+1} $
 $ \gamma_\mu \gamma_\nu = - \gamma_\nu \gamma_\mu \quad (\mu \neq \nu) $
The basis vectors $ \gamma_k $ share these properties with the ^{*}Gamma matrices, but no explicit matrix representation need be used in STA.
$ Cℓ_{3,0} (\mathbf{R}) $:Algebra of physical space (Time = scalar)
$ Cℓ_{3,1} (\mathbf{R}) $:Spacetime algebra (Time = vector)
$ Cℓ_{0,2} (\mathbf{R}) $:Quaternions (Three quaternions = two vectors that square to −1 and one bivector that squares to −1)
Rotors
 See also: ^{*}Rotor (mathematics)
The inverse of a vector is:
 $ v^{-1} = \frac{1}{v} = \frac{v}{vv} = \frac{v}{v \cdot v + v \wedge v} = \frac{v}{v \cdot v} \quad \text{(since } v \wedge v = 0 \text{)} $
The projection of $ v $ onto $ a $ (or the parallel part) is
 $ v_{\parallel a} = (v \cdot a)a^{-1} $
and the rejection of $ v $ from $ a $ (or the orthogonal part) is
 $ v_{\perp a} = v - v_{\parallel a} = (v\wedge a)a^{-1} . $
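In two dimensions the projection and rejection can be sketched directly from the dot and wedge coefficients. The function name and conventions below are my own; the division implements $ a^{-1} = a/(a \cdot a) $:

```python
def project_reject(v, a):
    """Split v into (parallel, perpendicular) parts relative to a, in 2D."""
    vx, vy = v
    ax, ay = a
    dot = vx * ax + vy * ay        # v . a
    wedge = vx * ay - vy * ax      # coefficient of e_xy in v ^ a
    aa = ax * ax + ay * ay         # a a = a . a
    par = (dot * ax / aa, dot * ay / aa)        # (v . a) a^{-1}
    # (v ^ a) a^{-1}: multiplying the bivector by the vector a gives
    # (w e_xy)(ax e_x + ay e_y) = w ay e_x - w ax e_y
    perp = (wedge * ay / aa, -wedge * ax / aa)
    return par, perp

print(project_reject((3, 4), (1, 0)))  # ((3.0, 0.0), (0.0, 4.0))
```

The two parts always sum back to the original vector.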
The reflection $ v' $ of a vector $ v $ along a vector $ a $, or equivalently across the hyperplane orthogonal to $ a $, is the same as negating the component of $ v $ parallel to $ a $. The result of the reflection will be
 $ v' = v_{\perp a} - v_{\parallel a} = -a v a^{-1} $
If a is a unit vector then $ a^{-1}=\frac{a}{1} = a $ and therefore $ v' = -ava $
$ -ava $ is called a sandwich product, or double-sided product.
If we have a product of vectors $ R = a_1a_2 \cdots a_r $ then we denote the reverse as
 $ R^\dagger = (a_1a_2\cdots a_r)^\dagger = a_r\cdots a_2 a_1. $
Any rotation is equivalent to 2 reflections.
 $ v'' = -bv'b = bavab = RvR^\dagger $
R is called a Rotor
 $ R = ba = b \cdot a + b \wedge a = Scalar + Bivector = Multivector $
If a and b are unit vectors then the rotor is automatically normalised:
 $ RR^\dagger = R^\dagger R=1 . $
Two rotations become:
 $ R_2R_1MR_1^\dagger R_2^\dagger $
R_{2}R_{1} represents Rotor R_{1} rotated by Rotor R_{2}. This would be called a single-sided transformation. (R_{2}R_{1}R_{2}^{\dagger} would be double-sided.) Therefore rotors do not transform double-sided the same way that other objects do. They transform single-sided.
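Here is a numerical sketch of the two-reflections picture in Cℓ_{2}(R), using the product formula derived earlier (all names are my own). Two unit vectors 22.5° apart produce a rotor that rotates by 45°:

```python
import math

def gp(p, q):
    """Geometric product in Cl_2(R), coefficients on (1, e_x, e_y, e_xy)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 + p1*q1 + p2*q2 - p3*q3,
            p1*q0 + p0*q1 + p3*q2 - p2*q3,
            p2*q0 - p3*q1 + p0*q2 + p1*q3,
            p3*q0 - p2*q1 + p1*q2 + p0*q3)

a = (0, 1.0, 0.0, 0)                     # unit vector along x
c, s = math.cos(math.pi / 8), math.sin(math.pi / 8)
b = (0, c, s, 0)                         # unit vector at 22.5 degrees
R = gp(b, a)                             # rotor = scalar + bivector
R_dag = (R[0], R[1], R[2], -R[3])        # reverse: flip the bivector
v = (0, 1.0, 0.0, 0)
print(gp(gp(R, v), R_dag))  # approximately (0, cos 45deg, sin 45deg, 0)
```

The sandwich $ R v R^\dagger $ turns the vector through twice the angle between the two reflection vectors.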
Quaternions
The square root of the product of a quaternion with its conjugate is called its ^{*}norm:
 $ \lVert q \rVert = \sqrt{qq^*} = \sqrt{q^*q} = \sqrt{a^2 + b^2 + c^2 + d^2} $
A unit quaternion is a quaternion of norm one. Unit quaternions, also known as ^{*}versors, provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions.
Every nonzero quaternion has a multiplicative inverse
 $ (a+bi+cj+dk)^{-1} = \frac{1}{a^2+b^2+c^2+d^2}\,(a-bi-cj-dk). $
Thus quaternions form a ^{*}division algebra.
The inverse of a unit quaternion is obtained simply by changing the sign of its imaginary components.
A ^{*}3D Euclidean vector such as (2, 3, 4) or (a_{x}, a_{y}, a_{z}) can be rewritten as 0 + 2 i + 3 j + 4 k or 0 + a_{x} i + a_{y} j + a_{z} k, where i, j, k are unit vectors representing the three ^{*}Cartesian axes. A rotation through an angle of θ around the axis defined by a unit vector
 $ \vec{u} = (u_x, u_y, u_z) = 0 + u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k} $
can be represented by a quaternion. This can be done using an ^{*}extension of Euler's formula^{w}:
 $ \mathbf{q} = e^{\frac{\theta}{2}{(0 + u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k})}} = \cos \frac{\theta}{2} + (0 + u_x\mathbf{i} + u_y\mathbf{j} + u_z\mathbf{k}) \sin \frac{\theta}{2} $
It can be shown that the desired rotation can be applied to an ordinary vector $ \mathbf{p} = (p_x, p_y, p_z) = 0 + p_x\mathbf{i} + p_y\mathbf{j} + p_z\mathbf{k} $ in 3dimensional space, considered as a quaternion with a real coordinate equal to zero, by evaluating the conjugation of p by q:
 $ \mathbf{p'} = \mathbf{q} \mathbf{p} \mathbf{q}^{-1} $
using the ^{*}Hamilton product.
The conjugate of a product of two quaternions is the product of the conjugates in the reverse order.
Conjugation by the product of two quaternions is the composition of conjugations by these quaternions: If p and q are unit quaternions, then rotation (conjugation) by pq is
 $ \mathbf{p q} \vec{v} (\mathbf{p q})^{-1} = \mathbf{p q} \vec{v} \mathbf{q}^{-1} \mathbf{p}^{-1} = \mathbf{p} (\mathbf{q} \vec{v} \mathbf{q}^{-1}) \mathbf{p}^{-1} $,
which is the same as rotating (conjugating) by q and then by p. The scalar component of the result is necessarily zero.
The imaginary part $ b\mathbf{i} + c\mathbf{j} + d\mathbf{k} $ of a quaternion behaves like a vector $ \vec{v} = (b,c,d) $ in three-dimensional vector space, and the real part a behaves like a ^{*}scalar in R. When quaternions are used in geometry, it is more convenient to define them as ^{*}a scalar plus a vector:
 $ a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k} = a + \vec{v}. $
When multiplying the vector/imaginary parts, in place of the rules i^{2} = j^{2} = k^{2} = ijk = −1 we have the quaternion multiplication rule:
 $ \vec{v} \vec{w} = \vec{v} \times \vec{w} - \vec{v} \cdot \vec{w}, $
From these rules it follows immediately that (^{*}see details):
 $ (s + \vec{v}) (t + \vec{w}) = (s t - \vec{v} \cdot \vec{w}) + (s \vec{w} + t \vec{v} + \vec{v} \times \vec{w}). $
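The scalar-plus-vector product rule translates directly into code. A sketch with my own tuple convention (s, (x, y, z)); the rotation at the end applies $ \mathbf{p'} = \mathbf{q} \mathbf{p} \mathbf{q}^{-1} $ for a 90° turn about the z axis:

```python
import math

def cross(v, w):
    return (v[1]*w[2] - v[2]*w[1], v[2]*w[0] - v[0]*w[2], v[0]*w[1] - v[1]*w[0])

def dot(v, w):
    return v[0]*w[0] + v[1]*w[1] + v[2]*w[2]

def quat_mul(p, q):
    """(s + v)(t + w) = (st - v.w) + (sw + tv + v x w)."""
    s, v = p
    t, w = q
    c = cross(v, w)
    return (s*t - dot(v, w),
            tuple(s*w[n] + t*v[n] + c[n] for n in range(3)))

print(quat_mul((0, (1, 0, 0)), (0, (0, 1, 0))))  # (0, (0, 0, 1)) : i j = k

theta = math.pi / 2
q = (math.cos(theta / 2), (0.0, 0.0, math.sin(theta / 2)))
q_inv = (q[0], (0.0, 0.0, -q[1][2]))   # inverse of a unit quaternion
p = (0, (1, 0, 0))
print(quat_mul(quat_mul(q, p), q_inv))  # approximately (0, (0, 1, 0))
```

Rotating the x axis a quarter turn about z lands on the y axis, as expected.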
It is important to note, however, that the vector part of a quaternion is, in truth, an "axial" vector or "pseudovector", not an ordinary or "polar" vector.
The reflection of a vector r in a plane perpendicular to a unit vector w can be written:
 $ r^{\prime} = - w\, r\, w. $
Two reflections make a rotation by an angle twice the angle between the two reflection planes, so
 $ v^{\prime\prime} = \sigma_2 \sigma_1 \, v \, \sigma_1 \sigma_2 $
corresponds to a rotation of 180° in the plane containing σ_{1} and σ_{2}.
This is very similar to the corresponding quaternion formula,
 $ v^{\prime\prime} = -\mathbf{k}\, v\, \mathbf{k}. $
In fact, the two are identical, if we make the identification
 $ \mathbf{k} = \sigma_2 \sigma_1, \mathbf{i} = \sigma_3 \sigma_2, \mathbf{j} = \sigma_1 \sigma_3 $
and it is straightforward to confirm that this preserves the Hamilton relations
 $ \mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i} \mathbf{j} \mathbf{k} = -1. $
In this picture, quaternions correspond not to vectors but to bivectors^{w} – quantities with magnitude and orientations associated with particular 2D planes rather than 1D directions. The relation to complex numbers^{w} becomes clearer, too: in 2D, with two vector directions σ_{1} and σ_{2}, there is only one bivector basis element σ_{1}σ_{2}, so only one imaginary. But in 3D, with three vector directions, there are three bivector basis elements σ_{1}σ_{2}, σ_{2}σ_{3}, σ_{3}σ_{1}, so three imaginaries.
The usefulness of quaternions for geometrical computations can be generalised to other dimensions, by identifying the quaternions as the even part Cℓ^{+}_{3,0}(R) of the Clifford algebra^{w} Cℓ_{3,0}(R).
Spinors
 See also: ^{*}Bispinor
External link:An introduction to spinors
Spinors may be regarded as non-normalised rotors which transform single-sided.^{[16]}
Note: The (real) ^{*}spinors in threedimensions are quaternions, and the action of an evengraded element on a spinor is given by ordinary quaternionic multiplication.^{[17]}
A spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360°. This property characterizes spinors.^{[18]}
In three dimensions...the ^{*}Lie group ^{*}SO(3) is not ^{*}simply connected. Mathematically, one can tackle this problem by exhibiting the ^{*}special unitary group SU(2), which is also the ^{*}spin group in three ^{*}Euclidean dimensions, as a ^{*}double cover of SO(3).
SU(2) is the following group,
 $ \mathrm{SU}(2) = \left \{ \begin{pmatrix} \alpha&-\overline{\beta}\\ \beta & \overline{\alpha} \end{pmatrix}: \ \ \alpha,\beta\in\mathbf{C}, |\alpha|^2 + |\beta|^2 = 1\right \} ~, $
where the overline denotes ^{*}complex conjugation.
For comparison: Using 2 × 2 complex matrices, the quaternion a + bi + cj + dk can be represented as
 $ \begin{bmatrix} a+bi & c+di \\ -(c-di) & a-bi \end{bmatrix}. $
If X = (x_{1},x_{2},x_{3}) is a vector in R^{3}, then we identify X with the 2 × 2 matrix with complex entries
 $ X=\left(\begin{matrix}x_1&x_2-ix_3\\x_2+ix_3&-x_1\end{matrix}\right) $
Note that −det(X) gives the square of the Euclidean length of X regarded as a vector, and that X is a ^{*}trace-free, or better, trace-zero ^{*}Hermitian matrix.
The unitary group acts on X via
 $ X\mapsto MXM^+ $
where M ∈ SU(2). Note that, since M is unitary,
 $ \det(MXM^+) = \det(X) $, and
 $ MXM^+ $ is trace-zero Hermitian.
Hence SU(2) acts via rotation on the vectors X. Conversely, since any ^{*}change of basis which sends trace-zero Hermitian matrices to trace-zero Hermitian matrices must be unitary, it follows that every rotation also lifts to SU(2). However, each rotation is obtained from a pair of elements M and −M of SU(2). Hence SU(2) is a double cover of SO(3). Furthermore, SU(2) is easily seen to be itself simply connected by realizing it as the group of unit ^{*}quaternions, a space ^{*}homeomorphic to the ^{*}3-sphere.
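A numerical sketch of this action (helper names are my own): an SU(2) element M sends X to MXM†, and M and −M produce the same rotation:

```python
import cmath, math

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(M):  # conjugate transpose
    return [[M[0][0].conjugate(), M[1][0].conjugate()],
            [M[0][1].conjugate(), M[1][1].conjugate()]]

def su2(alpha, beta):
    return [[alpha, -beta.conjugate()], [beta, alpha.conjugate()]]

def act(M, X):
    return mul2(mul2(M, X), dagger(M))

X = [[0, 1], [1, 0]]                       # the vector (0, 1, 0)
M = su2(cmath.exp(1j * math.pi / 4), 0)    # |alpha|^2 + |beta|^2 = 1
minus_M = [[-m for m in row] for row in M]

print(act(M, X))        # approximately [[0, 1j], [-1j, 0]], still trace-zero Hermitian
print(act(minus_M, X))  # the same matrix: M and -M give the same rotation
```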
A unit quaternion has the cosine of half the rotation angle as its scalar part and the sine of half the rotation angle multiplying a unit vector along some rotation axis (here assumed fixed) as its pseudovector (or axial vector) part. If the initial orientation of a rigid body (with unentangled connections to its fixed surroundings) is identified with a unit quaternion having a zero pseudovector part and +1 for the scalar part, then after one complete rotation (2π rad) the pseudovector part returns to zero and the scalar part has become −1 (entangled). After two complete rotations (4π rad) the pseudovector part again returns to zero and the scalar part returns to +1 (unentangled), completing the cycle.
The association of a spinor with a 2×2 complex ^{*}Hermitian matrix was formulated by Élie Cartan.^{[19]}
In detail, given a vector x = (x_{1}, x_{2}, x_{3}) of real (or complex) numbers, one can associate the complex matrix
 $ \vec{x} \rightarrow X \ =\left(\begin{matrix}x_3&x_1-ix_2\\x_1+ix_2&-x_3\end{matrix}\right). $
Matrices of this form have the following properties, which relate them intrinsically to the geometry of 3space:
 det X = – (length x)^{2}.
 X ^{2} = (length x)^{2}I, where I is the identity matrix.
 $ \frac{1}{2}(XY+YX)=({\bold x}\cdot{\bold y})I $ ^{[19]}
 $ \frac{1}{2}(XY-YX)=iZ $ where Z is the matrix associated to the cross product z = x × y.
 If u is a unit vector, then −UXU is the matrix associated to the vector obtained from x by reflection in the plane orthogonal to u.
 It is an elementary fact from ^{*}linear algebra that any rotation in 3space factors as a composition of two reflections. (Similarly, any orientation reversing orthogonal transformation is either a reflection or the product of three reflections.) Thus if R is a rotation, decomposing as the reflection in the plane perpendicular to a unit vector u_{1} followed by the plane perpendicular to u_{2}, then the matrix U_{2}U_{1}XU_{1}U_{2} represents the rotation of the vector x through R.
Having effectively encoded all of the rotational linear geometry of 3space into a set of complex 2×2 matrices, it is natural to ask what role, if any, the 2×1 matrices (i.e., the ^{*}column vectors) play. Provisionally, a spinor is a column vector
 $ \xi=\left[\begin{matrix}\xi_1\\\xi_2\end{matrix}\right], $ with complex entries ξ_{1} and ξ_{2}.
The space of spinors is evidently acted upon by complex 2×2 matrices. Furthermore, the product of two reflections in a given pair of unit vectors defines a 2×2 matrix whose action on Euclidean vectors is a rotation, so there is an action of rotations on spinors.
Often, the first example of spinors that a student of physics encounters is the 2×1 spinors used in Pauli's theory of electron spin. The ^{*}Pauli matrices are a vector of three 2×2 ^{*}matrices that are used as ^{*}spin ^{*}operators.
Given a ^{*}unit vector in 3 dimensions, for example (a, b, c), one takes a ^{*}dot product with the Pauli spin matrices to obtain a spin matrix for spin in the direction of the unit vector.
The ^{*}eigenvectors of that spin matrix are the spinors for spin1/2 oriented in the direction given by the vector.
Example: u = (0.8, −0.6, 0) is a unit vector. Dotting this with the Pauli spin matrices gives the matrix:
 $ S_u = (0.8,-0.6,0.0)\cdot \vec{\sigma}=0.8 \sigma_{1}-0.6\sigma_{2}+0.0\sigma_{3} = \begin{bmatrix} 0.0 & 0.8+0.6i \\ 0.8-0.6i & 0.0 \end{bmatrix} $
The eigenvectors may be found by the usual methods of ^{*}linear algebra, but a convenient trick is to note that a Pauli spin matrix is an ^{*}involutory matrix, that is, the square of the above matrix is the identity matrix.
Thus a (matrix) solution to the eigenvector problem with eigenvalues of ±1 is simply 1 ± S_{u}. That is,
 $ S_u (1\pm S_u) = \pm 1 (1 \pm S_u) $
One can then choose either of the columns of the eigenvector matrix as the vector solution, provided that the column chosen is not zero. Taking the first column of the above, eigenvector solutions for the two eigenvalues are:
 $ \begin{bmatrix} 1.0+ (0.0)\\ 0.0 +(0.8-0.6i) \end{bmatrix}, \begin{bmatrix} 1.0- (0.0)\\ 0.0-(0.8-0.6i) \end{bmatrix} $
The trick used to find the eigenvectors is related to the concept of ^{*}ideals, that is, the matrix eigenvectors (1 ± S_{u})/2 are ^{*}projection operators or ^{*}idempotents and therefore each generates an ideal in the Pauli algebra. The same trick works in any ^{*}Clifford algebra, in particular the ^{*}Dirac algebra that is discussed below. These projection operators are also seen in ^{*}density matrix theory where they are examples of pure density matrices.
More generally, the projection operator for spin in the (a, b, c) direction is given by
 $ \frac{1}{2}\begin{bmatrix}1+c&a-ib\\a+ib&1-c\end{bmatrix} $
and any nonzero column can be taken as the projection operator. While the two columns appear different, one can use a^{2} + b^{2} + c^{2} = 1 to show that they are multiples (possibly zero) of the same spinor.
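The involution trick is easy to verify numerically; the helper names below are my own, and the example reuses the unit vector (0.8, −0.6, 0):

```python
def pauli_dot(a, b, c):
    """(a, b, c) . (sigma_1, sigma_2, sigma_3) as a 2x2 complex matrix."""
    return [[c, a - 1j*b], [a + 1j*b, -c]]

def mat_mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = pauli_dot(0.8, -0.6, 0.0)
S2 = mat_mul2(S, S)
print(S2)  # approximately the 2x2 identity: S_u is involutory

# First column of 1 + S_u is the +1 eigenvector (a spinor):
spinor = [1.0, S[1][0]]
Sv = [S[0][0]*spinor[0] + S[0][1]*spinor[1],
      S[1][0]*spinor[0] + S[1][1]*spinor[1]]
print(Sv)  # approximately equal to spinor: eigenvalue +1
```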
 From Wikipedia:Tensor#Spinors:
When changing from one ^{*}orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not ^{*}simply connected (see ^{*}orientation entanglement and ^{*}plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.^{[20]} A ^{*}spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.^{[21]}^{[22]}
Succinctly, spinors are elements of the ^{*}spin representation of the rotation group, while tensors are elements of its ^{*}tensor representations. Other ^{*}classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.
 From Wikipedia:Spinor:
Quote from Elie Cartan: The Theory of Spinors, Hermann, Paris, 1966: "Spinors...provide a linear representation of the group of rotations in a space with any number $ n $ of dimensions, each spinor having $ 2^\nu $ components where $ n = 2\nu+1 $ or $ 2\nu $." The star (*) refers to Cartan 1913.
(Note: $ \nu $ is the number of ^{*}simultaneous independent rotations an object can have in n dimensions.)
Although spinors can be defined purely as elements of a representation space of the spin group (or its Lie algebra of infinitesimal rotations), they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra. The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis-independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anticommutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices, and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, hence what precisely constitutes a "column vector" (or spinor), involves the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as (complex) column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.
In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to (angular momenta about) the three coordinate axes. These are 2×2 matrices with complex entries, and the two-component complex column vectors on which these matrices act by matrix multiplication are the spinors. In this case, the spin group is isomorphic to the group of 2×2 unitary matrices with determinant one, which naturally sits inside the matrix algebra. This group acts by conjugation on the real vector space spanned by the Pauli matrices themselves, realizing it as a group of rotations among them, but it also acts on the column vectors (that is, the spinors).
 From Wikipedia:Spinor:
In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles. More precisely, it is the fermions of spin-1/2 that are described by spinors, which is true both in the relativistic and non-relativistic theory. The wavefunction of the non-relativistic electron has values in two-component spinors transforming under three-dimensional infinitesimal rotations. The relativistic ^{*}Dirac equation for the electron is an equation for four-component spinors transforming under infinitesimal Lorentz transformations, for which a substantially similar theory of spinors exists.
Functions
From Wikipedia:Function (mathematics)
In mathematics, a function is a ^{*}relation between a ^{*}set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function $ f(x)=x^2 $ that relates each ^{*}real number x to its square x^{2}. The output of a function f corresponding to an input x is denoted by f(x) (read "f of x"). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9. See Tutorial:Evaluate by Substitution. Likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. (The same output may be produced by more than one input, but each input gives only one output.) The input ^{*}variable(s) are sometimes referred to as the argument(s) of the function.
Euclid's "common notions"
Things that do not differ from one another are equal to one another
 a=a
Things that are equal to the same thing are also equal to one another
 If a=b and b=c then a=c
If equals are added to equals, then the wholes are equal
 If a=b and c=d then a+c=b+d
If equals are subtracted from equals, then the remainders are equal
 If a=b and c=d then a-c=b-d
The whole is greater than the part.
 If b>0 then a+b>a
Elementary algebra
Elementary algebra builds on and extends arithmetic by introducing letters called ^{*}variables to represent general (nonspecified) numbers.
Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition^{w}, subtraction^{w}, multiplication^{w}, division^{w} and exponentiation^{w}). For example,
 Added terms are simplified using coefficients. For example, $ x + x + x $ can be simplified as $ 3x $ (where 3 is a numerical coefficient).
 Multiplied terms are simplified using exponents. For example, $ x \times x \times x $ is represented as $ x^3 $
 Like terms are added together,^{[23]} for example, $ 2x^2 + 3ab - x^2 + ab $ is written as $ x^2 + 4ab $, because the terms containing $ x^2 $ are added together, and, the terms containing $ ab $ are added together.
 Brackets can be "multiplied out", using the distributive property^{w}. For example, $ x (2x + 3) $ can be written as $ (x \times 2x) + (x \times 3) $ which can be written as $ 2x^2 + 3x $
 Expressions can be factored. For example, $ 6x^5 + 3x^2 $, by dividing both terms by $ 3x^2 $ can be written as $ 3x^2 (2x^3 + 1) $
For any function $ f $, if $ a=b $ then:
 $ f(a) = f(b) $
 $ a + c = b + c $
 $ ac = bc $
 $ a^c = b^c $
One must be careful though when squaring both sides of an equation since this can result in solutions that don't satisfy the original equation.
 $ -1 \neq 1 $ yet $ (-1)^2 = 1^2 $
A function is an even function^{w} if f(x) = f(-x)
A function is an odd function^{w} if f(-x) = -f(x)
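These two definitions are easy to test numerically at a set of sample points; a small sketch (the helper names are mine):

```python
import math

def is_even(f, xs):
    """f(x) = f(-x) at every sample point."""
    return all(abs(f(x) - f(-x)) < 1e-12 for x in xs)

def is_odd(f, xs):
    """f(-x) = -f(x) at every sample point."""
    return all(abs(f(-x) + f(x)) < 1e-12 for x in xs)

xs = [0.1 * k for k in range(1, 20)]
assert is_even(math.cos, xs) and is_even(lambda x: x**2, xs)
assert is_odd(math.sin, xs) and is_odd(lambda x: x**3, xs)
```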
Trigonometry
The law of cosines^{w} reduces to the Pythagorean theorem^{w} when gamma=90 degrees
 $ c^2 = a^2 + b^2 - 2ab\cos\gamma, $
The law of sines^{w} (also known as the "sine rule") for an arbitrary triangle states:
 $ \frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = \frac{abc}{2\Delta}, $
where $ \Delta $ is the area of the triangle
 $ \mbox{Area} = \Delta = \frac{1}{2}a b\sin C. $
The law of tangents^{w}:
 $ \frac{a-b}{a+b}=\frac{\tan\left[\tfrac{1}{2}(A-B)\right]}{\tan\left[\tfrac{1}{2}(A+B)\right]} $
Right triangles
A right triangle is a triangle with gamma=90 degrees.
For small values of x, sin x ≈ x. (If x is in radians).
SOH → sin = "opposite" / "hypotenuse"; sin A = a/c
CAH → cos = "adjacent" / "hypotenuse"; cos A = b/c
TOA → tan = "opposite" / "adjacent"; tan A = a/b
 $ \sin x = \frac{e^{ix} - e^{-ix}}{2i}, \qquad \cos x = \frac{e^{ix} + e^{-ix}}{2}, \qquad \tan x = \frac{i(e^{-ix} - e^{ix})}{e^{ix} + e^{-ix}}. $
(Note: the expression for tan(x) has i in the numerator, not in the denominator, because the order of the terms (and thus the sign) of the numerator is reversed relative to the expression for sin(x).)
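The exponential forms can be checked against the ordinary trigonometric functions using complex arithmetic; a brief sketch (function names are mine):

```python
import cmath
import math

def sin_exp(x):
    return (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j

def cos_exp(x):
    return (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2

def tan_exp(x):
    # numerator order reversed relative to sin_exp, hence +i not -i
    return (1j * (cmath.exp(-1j * x) - cmath.exp(1j * x))
            / (cmath.exp(1j * x) + cmath.exp(-1j * x)))

for x in [0.3, 1.0, 2.5]:
    assert abs(sin_exp(x) - math.sin(x)) < 1e-12
    assert abs(cos_exp(x) - math.cos(x)) < 1e-12
    assert abs(tan_exp(x) - math.tan(x)) < 1e-12
```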
Hyperbolic functions
 See also: ^{*}Hyperbolic angle
Hyperbolic functions^{w} are analogs of the ordinary trigonometric, or circular, functions.
 Hyperbolic sine:
 $ \sinh x = \frac {e^x - e^{-x}} {2} = \frac {e^{2x} - 1} {2e^x} = \frac {1 - e^{-2x}} {2e^{-x}}. $
 Hyperbolic cosine:
 $ \cosh x = \frac {e^x + e^{-x}} {2} = \frac {e^{2x} + 1} {2e^x} = \frac {1 + e^{-2x}} {2e^{-x}}. $
 Hyperbolic tangent:
 $ \tanh x = \frac{\sinh x}{\cosh x} = \frac {e^x - e^{-x}} {e^x + e^{-x}} = $
 $ = \frac{e^{2x} - 1} {e^{2x} + 1} = \frac{1 - e^{-2x}} {1 + e^{-2x}}. $
 Hyperbolic cotangent:
 $ \coth x = \frac{\cosh x}{\sinh x} = \frac {e^x + e^{-x}} {e^x - e^{-x}} = $
 $ = \frac{e^{2x} + 1} {e^{2x} - 1} = \frac{1 + e^{-2x}} {1 - e^{-2x}}, \qquad x \neq 0. $
 Hyperbolic secant:
 $ \operatorname{sech} x = \frac{1}{\cosh x} = \frac {2} {e^x + e^{-x}} = $
 $ = \frac{2e^x} {e^{2x} + 1} = \frac{2e^{-x}} {1 + e^{-2x}}. $
 Hyperbolic cosecant:
 $ \operatorname{csch} x = \frac{1}{\sinh x} = \frac {2} {e^x - e^{-x}} = $
 $ = \frac{2e^x} {e^{2x} - 1} = \frac{2e^{-x}} {1 - e^{-2x}}, \qquad x \neq 0. $
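A quick numerical check of the exponential definitions, including the hyperbolic analogue of sin²x + cos²x = 1 (which follows directly from the definitions above):

```python
import math

for x in [-2.0, 0.7, 3.0]:
    assert abs(math.sinh(x) - (math.exp(x) - math.exp(-x)) / 2) < 1e-9
    assert abs(math.cosh(x) - (math.exp(x) + math.exp(-x)) / 2) < 1e-9
    # the hyperbolic analogue of sin^2 + cos^2 = 1:
    assert abs(math.cosh(x)**2 - math.sinh(x)**2 - 1.0) < 1e-9

x = 0.7
assert abs(math.tanh(x) - math.sinh(x) / math.cosh(x)) < 1e-12
```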
Areas and volumes
The length of the circumference C of a circle is related to the radius r and diameter d by:
 $ \mathrm{Circumference} = \tau r = 2 \pi r = \pi d $
 where
 $ \pi $ ≈ 3.141592654
 $ \tau $ = 2π
The area of a circle is:
 $ \mathrm{Area} = \pi r^2 $
The surface area of a sphere is
 $ \mathrm{\text{Surface area}} = 4 \cdot \pi r^2 $
 The surface area of a sphere 1 unit in radius is:
 $ 4 \pi (1 \text{ unit})^2 = 12.56637 \text{ unit}^2 $
 The surface area of a sphere 128 units in radius is:
 $ 4 \pi (128 \text{ unit})^2 = 205,887 \text{ unit}^2 $
The volume inside a sphere is
 $ \mathrm{Volume} = \frac{4}{3} \cdot \pi r^3 $
 The volume of a sphere 1 unit in radius is:
 $ V = \frac{4}{3} \cdot \pi (1 \text{ unit})^3 = 4.1888 \text{ unit}^3 $
The moment of inertia of a hollow sphere is:
 $ I = \frac{2}{3} m r^2\,\! $
Moment of inertia of a sphere is:
 $ I = \frac{2}{5} m r^2\,\! $
The area of a hexagon is:
 $ Area = \frac{3 \sqrt{3}}{2} a^2 = 2.59807621135 \cdot a^2 $
 where a is the length of any side.
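The formulas above, including the two worked sphere examples, can be reproduced in a few lines (function names are mine):

```python
import math

def circle_area(r):    return math.pi * r**2
def sphere_area(r):    return 4 * math.pi * r**2
def sphere_volume(r):  return 4 / 3 * math.pi * r**3
def hexagon_area(a):   return 3 * math.sqrt(3) / 2 * a**2

assert abs(sphere_area(1) - 12.56637) < 1e-4       # unit sphere
assert round(sphere_area(128)) == 205887           # sphere of radius 128
assert abs(sphere_volume(1) - 4.1888) < 1e-3       # unit ball
assert abs(hexagon_area(1) - 2.59807621135) < 1e-9
```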
Polynomials
 See also: ^{*}Runge's phenomenon, ^{*}Polynomial ring, ^{*}System of polynomial equations, ^{*}Rational root theorem, ^{*}Descartes' rule of signs, and ^{*}Complex conjugate root theorem
 From Wikipedia:Polynomial:
A polynomial^{w} can always be written in the form
 $ polynomial = Z(x) = a_0 + a_1 x + a_2 x^2 + \dotsb + a_{n1}x^{n1} + a_n x^n $
where $ a_0, \ldots, a_n $ are constants called coefficients and n is the degree^{w} of the polynomial.
 A ^{*}linear polynomial is a polynomial of degree one.
Each individual ^{*}term is the product of the ^{*}coefficient and a variable raised to a nonnegative integer power.
 A ^{*}monomial has only one term.
 A ^{*}binomial has 2 terms.
^{*}Fundamental theorem of algebra:
 Every single-variable, degree n polynomial with complex coefficients has exactly n complex roots^{w}.
 However, some or even all of the roots might be the same number.
 A root (or zero) of a function is a value of x for which Z(x)=0.
 $ Z(x) = a_n(x - z_1)(x - z_2)\dotsb(x - z_n) $
 If $ Z(x) = (x - z_1)(x - z_2)^k $ then z_{2} is a root of ^{*}multiplicity k.^{[24]} z_{2} is a root of multiplicity k-1 of the derivative (Derivative is defined below) of Z(x).
 If k=1 then z_{2} is a simple root.
 The graph is tangent to the x axis at the multiple roots of f and not tangent at the simple roots.
 The graph crosses the xaxis at roots of odd multiplicity and bounces off (not goes through) the xaxis at roots of even multiplicity.
 Near x=z_{2} the graph has the same general shape as $ A(x - z_2)^k $
 The roots of the formula $ ax^2+bx+c=0 $ are given by the Quadratic formula^{w}:
$ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. $ See Completing the square^{w}
 $ ax^2+bx+c = a\left(x+\frac{b}{2a}\right)^2+c-\frac{b^2}{4a} = a(x-h)^2+k $
 This is a parabola shifted to the right h units, stretched by a factor of a, and moved upward k units.
 k is the value at x=h and is either the maximum or the minimum value.
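Both the quadratic formula and the vertex form can be verified numerically; a minimal sketch (the sample polynomials are arbitrary, and cmath handles complex roots as well):

```python
import cmath

def quadratic_roots(a, b, c):
    """x = (-b +/- sqrt(b^2 - 4ac)) / (2a); cmath allows complex roots."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 - 5x + 6 = (x - 2)(x - 3)
r1, r2 = quadratic_roots(1, -5, 6)
assert {r1, r2} == {2, 3}

# vertex form: a(x - h)^2 + k with h = -b/(2a), k = c - b^2/(4a)
a, b, c = 2.0, -8.0, 3.0
h, k = -b / (2 * a), c - b * b / (4 * a)
for x in [-1.0, 0.5, 4.0]:
    assert abs((a * x * x + b * x + c) - (a * (x - h)**2 + k)) < 1e-9
```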
$ (x+y)^n = {n \choose 0}x^n y^0 + {n \choose 1}x^{n-1}y^1 + {n \choose 2}x^{n-2}y^2 + \cdots + {n \choose n-1}x^1 y^{n-1} + {n \choose n}x^0 y^n, $
 Where $ \binom{n}{k} = \frac{n!}{k! (n-k)!}. $ See Binomial coefficient^{w}
$ x^2 - y^2 = (x + y)(x - y) $
$ x^2 + y^2 = (x + yi)(x - yi) $
The polynomial remainder theorem^{w} states that the remainder of the division of a polynomial Z(x) by the linear polynomial x-a is equal to Z(a). See ^{*}Ruffini's rule.
Determining the value at Z(a) is sometimes easier if we use ^{*}Horner's method (^{*}synthetic division) by writing the polynomial in the form
 $ Z(x) = a_0 + x(a_1 + x(a_2 + \cdots + x(a_{n-1} + x(a_n)))). $
A ^{*}monic polynomial is a one variable polynomial in which the leading coefficient is equal to 1.
 $ a_0 + a_1x + a_2x^2 + \cdots + a_{n-1}x^{n-1} + 1x^n $
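Horner's method mentioned above is a one-loop algorithm; a small sketch (the coefficient ordering a_0..a_n follows the nested form above):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n as a_0 + x*(a_1 + x*(...)).
    coeffs is [a_0, a_1, ..., a_n]."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# Z(x) = 1 + 2x + 3x^2:  Z(2) = 1 + 4 + 12 = 17
assert horner([1, 2, 3], 2) == 17

# polynomial remainder theorem: the remainder of Z(x) / (x - a) is Z(a),
# so evaluating Z(5) this way also gives that remainder
assert horner([1, 2, 3], 5) == 1 + 2 * 5 + 3 * 25
```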
Rational functions
A ^{*}rational function is a function of the form
 $ f(x) = k{(x - z_1)(x - z_2)\dotsb(x - z_n) \over (x - p_1)(x - p_2)\dotsb(x - p_m)} = {Z(x) \over P(x)} $
It has n zeros^{w} and m poles^{w}. A pole is a value of x for which f(x) = infinity.
 The vertical asymptotes^{w} are the poles of the rational function.
 If n<m then f(x) has a horizontal asymptote at the x axis
 If n=m then f(x) has a horizontal asymptote at k.
 If n>m then f(x) has no horizontal asymptote.
 See also ^{*}Wikipedia:Asymptote#Oblique_asymptotes
 Given two polynomials $ Z(x) $ and $ P(x) = (x-p_1)(x-p_2) \cdots (x-p_m) $, where the p_{i} are distinct constants and deg Z < m, partial fractions^{w} are generally obtained by supposing that
 $ \frac{Z(x)}{P(x)} = \frac{c_1}{x-p_1} + \frac{c_2}{x-p_2} + \cdots + \frac{c_m}{x-p_m} $
 and solving for the c_{i} constants, by substitution, by ^{*}equating the coefficients of terms involving the powers of x, or otherwise.
 (This is a variant of the ^{*}method of undetermined coefficients.)^{[25]}
 If the degree of Z is not less than m then use long division to divide P into Z. The remainder then replaces Z in the equation above and one proceeds as before.
 If $ P(x) = (x-p)^m $ then $ \frac{Z(x)}{P(x)} = \frac{c_1}{(x-p)} + \frac{c_2}{(x-p)^2} + \cdots + \frac{c_m}{(x-p)^m} $
A ^{*}Generalized hypergeometric series is given by
 $ \sum_{x=0}^\infty c_x $ where c_{0}=1 and $ {c_{x+1} \over c_x} = {Z(x) \over P(x)} = f(x) $
The function f(x) has n zeros and m poles.
 ^{*}Basic hypergeometric series, or hypergeometric q-series, are ^{*}q-analogue generalizations of generalized hypergeometric series.^{[26]}
 Roughly speaking a ^{*}q-analog of a theorem, identity or expression is a generalization involving a new parameter q that returns the original theorem, identity or expression in the limit as q → 1^{[27]}
 We define the q-analog of n, also known as the q-bracket or q-number of n, to be
 $ [n]_q=\frac{1-q^n}{1-q} = q^0 + q^1 + q^2 + \ldots + q^{n-1} $
 one may define the q-analog of the factorial^{w}, known as the ^{*}q-factorial, by
 $ [n]_q! = [1]_q \cdot [2]_q \cdots [n-1]_q \cdot [n]_q $
 ^{*}Elliptic hypergeometric series are generalizations of basic hypergeometric series.
 An elliptic function is a meromorphic function that is periodic in two directions.
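The q-bracket and q-factorial defined above are easy to compute exactly with rational arithmetic; a sketch showing that q → 1 recovers n and n! (function names are mine):

```python
from fractions import Fraction

def q_bracket(n, q):
    """[n]_q = (1 - q^n) / (1 - q) = 1 + q + ... + q^(n-1)."""
    return sum(Fraction(q)**k for k in range(n))

def q_factorial(n, q):
    """[n]_q! = [1]_q [2]_q ... [n]_q."""
    result = Fraction(1)
    for k in range(1, n + 1):
        result *= q_bracket(k, q)
    return result

assert q_bracket(3, 2) == 7           # 1 + 2 + 4
assert q_bracket(5, 1) == 5           # q = 1 recovers n
assert q_factorial(4, 1) == 24        # ... and n!
```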
A ^{*}generalized hypergeometric function is given by
 $ F(x) = {}_nF_m(z_1,...z_n;p_1,...p_m;x) = \sum_{y=0}^\infty c_y x^y $
So for e^{x} (see below) we have:
 $ c_y = \frac{1}{y!}, \qquad \frac{c_{y+1}}{c_y} = \frac{1}{y+1}. $
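A hypergeometric-style series can be summed directly from its term ratio c_{y+1}/c_y; a sketch using the ratio 1/(y+1), which reproduces e^x (the helper name is mine):

```python
import math

def hyper_sum(ratio, x, terms=30):
    """Sum c_y * x^y where c_0 = 1 and c_{y+1} / c_y = ratio(y)."""
    total, c = 0.0, 1.0
    for y in range(terms):
        total += c * x**y
        c *= ratio(y)
    return total

# for e^x the term ratio is 1 / (y + 1), i.e. c_y = 1 / y!
assert abs(hyper_sum(lambda y: 1 / (y + 1), 1.0) - math.e) < 1e-12
assert abs(hyper_sum(lambda y: 1 / (y + 1), 2.0) - math.e**2) < 1e-9
```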
Integration and differentiation
 See also: Hyperreal number^{w} and Implicit differentiation^{w}
The integral^{w} is a generalization of multiplication.
 For example: a unit mass dropped from point x_{2} to point x_{1} will release energy.
 The usual equation is a simple multiplication:
 $ gravity \cdot (x_2  x_1) = energy $
 But that equation can't be used if the strength of gravity is itself a function of x.
 The strength of gravity at x_{1} would be different than it is at x_{2}.
 And in reality gravity really does depend on x (x is the distance from the center of the earth):
 $ gravity(x) = 1/x^2 $ (See inversesquare law^{w}.)
 However, the corresponding Definite integral^{w} is easily solved:
 $ \int_{x_1}^{x_2} gravity(x) \cdot dx $
The surprisingly simple rule for solving definite integrals:  $ \int_{x_1}^{x_2} f(x) \cdot dx \quad = \quad F(x_2)-F(x_1) $
F(x) is called the indefinite integral^{w}. (antiderivative^{w})
 $ F(x) = \int f(x) \cdot dx $
k and y are arbitrary constants:
 $ \int k \cdot x^y \cdot dx \quad = \quad k \cdot \int x^y \cdot dx \quad = \quad k \cdot \frac{x^{y+1}}{y+1} $
(Units (feet, mm...) behave exactly like constants.)
And most conveniently :
 $ \int \bigg (f(x) + g(x) \bigg) \cdot dx = \int f(x) \cdot dx + \int g(x) \cdot dx $
 The integral of a function is equal to the area under the curve.
 When the "curve" is a constant (in other words, k•x^{0}) then the integral reduces to ordinary multiplication.
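The gravity example can be made concrete with a midpoint Riemann sum: the numeric area under 1/x² matches F(x2) - F(x1) with F(x) = -1/x, and a constant integrand reduces to plain multiplication. A sketch (names are mine):

```python
def integrate(f, x1, x2, n=100000):
    """Midpoint Riemann sum approximation of the definite integral."""
    dx = (x2 - x1) / n
    return sum(f(x1 + (k + 0.5) * dx) for k in range(n)) * dx

gravity = lambda x: 1 / x**2       # inverse-square law
F = lambda x: -1 / x               # an antiderivative of 1/x^2

x1, x2 = 1.0, 2.0
assert abs(integrate(gravity, x1, x2) - (F(x2) - F(x1))) < 1e-9
# with constant "gravity" the integral reduces to plain multiplication
assert abs(integrate(lambda x: 9.8, 0.0, 3.0) - 9.8 * 3.0) < 1e-9
```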
The derivative^{w} is a generalization of division.
The derivative of the integral of f(x) is just f(x).
The derivative of a function at any point is equal to the slope of the function at that point.
 $ f'(x)=\frac{f(x+dx)-f(x)}{dx}. $
The equation of the line tangent to a function at point a is
 $ y(x) = f(a) + f'(a)(x-a) $
The Lipschitz constant^{w} of a function is a real number for which the absolute value of the slope of the function at every point is not greater than this real number.
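The slope definition can be approximated numerically; the sketch below uses the symmetric difference quotient, a slightly more accurate variant of the one-sided quotient (helper names are mine):

```python
def derivative(f, x, dx=1e-6):
    """Symmetric difference quotient approximation of f'(x)."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def tangent(f, a):
    """The line y(x) = f(a) + f'(a) * (x - a) tangent to f at a."""
    fa, slope = f(a), derivative(f, a)
    return lambda x: fa + slope * (x - a)

f = lambda x: x**3
assert abs(derivative(f, 2.0) - 12.0) < 1e-6      # d/dx x^3 = 3x^2
line = tangent(f, 1.0)
assert abs(line(1.0) - f(1.0)) < 1e-12            # touches the curve at a
```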
The derivative of f(x) where f(x) = k•x^{y} is
 $ f'(x) = {df \over dx} = {d(k \cdot x^y) \over dx} \quad = \quad k \cdot {d(x^y) \over dx} \quad = \quad k \cdot y \cdot x^{y-1} $
 The derivative of $ k \cdot x^0 $ is $ k \cdot 0 \cdot x^{-1} $
 The integral of $ x^{-1} $ is ln(x)^{[28]}. See natural log^{w}
Chain rule^{w} for the derivative of a function of a function:
 $ f(g(x))' = \frac{df}{dx} = \frac{df}{dg} \cdot \frac{dg}{dx} $
The Chain rule for a function of 2 functions:
 $ f(g(x), h(x))' = \frac{\operatorname df}{\operatorname dx} = { \partial f \over \partial g}{\operatorname dg \over \operatorname dx} + {\partial f \over \partial h}{\operatorname dh \over \operatorname dx } $ (See "partial derivatives" below)
The Product rule^{w} can be considered a special case of the chain rule^{w} for several variables^{[29]}
 $ \frac{df}{dx} = {d (g(x) \cdot h(x)) \over dx} = \frac{\partial(g \cdot h)}{\partial g}\frac{dg}{dx}+\frac{\partial (g \cdot h)}{\partial h}\frac{dh}{dx} = \frac{dg}{dx} h + g \frac{dh}{dx} $
Product rule^{w}:
 $ (g \cdot h)' = \frac{(g+dg) \cdot (h+dh) - g \cdot h}{dx} = g' \cdot h + g \cdot h' $ (because $ dh \cdot dg $ is negligible)
 $ (g \cdot h \cdot j)' = g' \cdot h \cdot j + g \cdot h' \cdot j + g \cdot h \cdot j' $
^{*}General Leibniz rule:
 $ (gh)^{(n)}=\sum_{k=0}^n {n \choose k} g^{(n-k)} h^{(k)} $
By the chain rule:
 $ \bigg(\frac{1}{h}\bigg)' = -\frac{1}{h^2} \cdot h' $
Therefore the Quotient rule^{w}:
 $ \bigg( \frac{g(x)}{h(x)} \bigg)' = \bigg( g \cdot \frac{1}{h} \bigg)' = g' \cdot \frac{1}{h} - g \cdot \frac{h'}{h^2} = \frac{g' \cdot h - g \cdot h'}{h^2} $
There is a chain rule for integration but the inner function must have the form $ g=ax+c $ so that its derivative $ \frac{dg}{dx} = a $ and therefore $ dx=\frac{dg}{a} $
 $ \int f(g(x)) \cdot dx = \int f(g) \cdot \frac{dg}{a} = \frac{1}{a} \int f(g) \cdot dg $
More generally, the inner function can have the form $ g=ax^y+c $ so that its derivative $ \frac{dg}{dx} = a \cdot y \cdot x^{y-1} $ and therefore $ dx=\frac{dg}{a \cdot y \cdot x^{y-1}} $ provided that all factors involving x cancel out.
 $ \int x^{y1} \cdot f(g(x)) \cdot dx = \int {\color{red} x^{y1}} \cdot f(g) \cdot \frac{dg}{a \cdot y \cdot {\color{red} x^{y1}}} = \frac{1}{a \cdot y} \int f(g) \cdot dg $
The product rule for integration is called Integration by parts^{w}
 $ g \cdot h' = (g \cdot h)' - g' \cdot h $
 $ \int g \cdot h' \cdot dx = g \cdot h - \int g' \cdot h \cdot dx $
One can use partial fractions^{w} or even the Taylor series^{w} to convert difficult integrals into a more manageable form.
 $ \frac{f(x)}{(x-1)^2} = \frac{a_0(x-1)^0 + a_1(x-1)^1 + \dots + a_n(x-1)^n}{(x-1)^2} $
The fundamental theorem of Calculus is:
 $ F(x) - F(a) = \int_a^x\!f(t)\, dt \quad \text{and} \quad F'(x) = f(x) $
The fundamental theorem of calculus is just the particular case of the ^{*}Leibniz integral rule:
 $ \frac{d}{dx} \left (\int_{a(x)}^{b(x)}f(x,t)\,dt \right) = f\big(x,b(x)\big)\cdot \frac{d}{dx} b(x) - f\big(x,a(x)\big)\cdot \frac{d}{dx} a(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x} f(x,t) \,dt. $
In calculus, a function f defined on a subset of the real numbers with real values is called ^{*}monotonic if and only if it is either entirely non-increasing, or entirely non-decreasing.^{[30]}
A differential form^{w} is a generalisation of the notion of a differential^{w} that is independent of the choice of ^{*}coordinate system. f(x,y) dx ∧ dy is a 2form in 2 dimensions (an area element). The derivative^{w} operation on an nform is an n+1form; this operation is known as the exterior derivative^{w}. By the generalized Stokes' theorem^{w}, the integral of a function over the boundary of a manifold^{w} is equal to the integral of its exterior derivative on the manifold itself.
Taylor & Maclaurin series
If we know the value of a smooth function^{w} at x=0 (smooth means all its derivatives are continuous^{w}) and we also know the value of all of its derivatives at x=0 then we can determine the value at any other point x by using the Maclaurin series^{w}. ("!" means factorial^{w})
 $ a_0 x^0 + a_1 x^1 + a_2 x^2 + a_3 x^3 \cdots \quad \text{where} \quad a_n = {f^{(n)}(0) \over n!} $
The proof of this is actually quite simple. Plugging in a value of x=0 causes all terms but the first to become zero. So, assuming that such a function exists, a_{0} must be the value of the function at x=0. Simply differentiate both sides of the equation and repeat for the next term. And so on.
 The Taylor series^{w} generalizes this formula.
 $ f(z)=\sum_{k=0}^\infty \alpha_k (zz_0)^k $
 An analytic function^{w} is a function whose Taylor series converges for every z_{0} in its domain^{w}; analytic functions are infinitely differentiable^{w}.
 Any vector g = (z_{0}, α_{0}, α_{1}, ...) is a ^{*}germ if it represents a power series of an analytic function^{w} around z_{0} with some radius of convergence r > 0.
 The set of germs $ \mathcal G $ is a Riemann surface^{w}.
 Riemann surfaces are the objects on which multivalued functions become singlevalued.
 A ^{*}connected component of $ \mathcal G $ (i.e., an equivalence class) is called a ^{*}sheaf.
We can easily determine the Maclaurin series expansion of the exponential function^{w} $ e^x $ (because it is equal to its own derivative).^{[28]}
 $ e^x = \sum_{n = 0}^{\infty} {x^n \over n!} = {x^0 \over 0!} + {x^1 \over 1!} + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + \cdots $
 The above holds true even if x is a matrix. See ^{*}Matrix exponential
And cos(x)^{w} and sin(x)^{w} (because cosine is the derivative of sine which is the derivative of cosine)
 $ \cos x = \frac{x^0}{0!} - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots $
 $ \sin x = \frac{x^1}{1!} - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots $
It then follows that $ e^{ix}=\cos x+i\sin x=\operatorname{cis} x $ and therefore $ e^{i \pi}=-1 + i\cdot0 $ See Euler's formula^{w}
 x is the angle in ^{*}radians.
 This makes the equation for a circle in the complex plane, and by extension sine and cosine, extremely simple and easy to work with especially with regard to differentiation and integration.
 $ \frac{d(e^{i \cdot k \cdot t})}{dt} = i \cdot k \cdot e^{i \cdot k \cdot t} $
 Differentiation and integration are replaced with multiplication and division. Calculus is replaced with algebra. Therefore any expression that can be represented as a sum of sine waves can be easily differentiated or integrated.
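The series for e^x, cos and sin, and Euler's formula, can all be confirmed numerically by comparing partial sums with the built-in functions (function names are mine):

```python
import cmath
import math

def maclaurin_exp(x, terms=25):
    return sum(x**n / math.factorial(n) for n in range(terms))

def maclaurin_cos(x, terms=25):
    return sum((-1)**k * x**(2 * k) / math.factorial(2 * k)
               for k in range(terms))

def maclaurin_sin(x, terms=25):
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 1.2
assert abs(maclaurin_exp(x) - math.exp(x)) < 1e-12
assert abs(maclaurin_cos(x) - math.cos(x)) < 1e-12
assert abs(maclaurin_sin(x) - math.sin(x)) < 1e-12

# Euler's formula e^{ix} = cos x + i sin x, and e^{i*pi} = -1
assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-12
```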
Fourier Series
The Maclaurin series can't be used for a discontinuous function like a square wave because it is not differentiable. (^{*}Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. See ^{*}Generalized function.)
But remarkably we can use the Fourier series^{w} to expand it or any other periodic function^{w} into an infinite sum of sine waves each of which is fully differentiable^{w}!
 $ f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty \left[a_n\cos\left(nt\right)+b_n\sin\left(nt\right)\right] $
 $ a_n = \frac{2}{p}\int_{t_0}^{t_p} f(t)\cdot \cos\left(\tfrac{2\pi nt}{p}\right)\ dt $
 $ b_n = \frac{2}{p}\int_{t_0}^{t_p} f(t)\cdot \sin\left(\tfrac{2\pi nt}{p}\right)\ dt $
 The reason this works is because sine and cosine are ^{*}orthogonal functions.
 $ \langle \sin,\cos\rangle=0. $
 That means that multiplying any 2 sine waves of frequency n and frequency m and integrating over one period will always equal zero unless n=m.
 See the graph of sin^{2}(x) to the right.
 $ \sin mx \cdot \sin nx = \frac{\cos (m - n)x - \cos (m+n) x}{2} $
 See ^{*}Amplitude_modulation
 And of course ∫ f_{n}*(f_{1}+f_{2}+f_{3}+...) = ∫ (f_{n}*f_{1}) + ∫ (f_{n}*f_{2}) + ∫ (f_{n}*f_{3}) +...
 The complex form of the Fourier series uses complex exponentials instead of sine and cosine and uses both positive and negative frequencies (clockwise and counter clockwise) whose imaginary parts cancel.
 The complex coefficients encode both amplitude and phase and are complex conjugates of each other.
 $ F(\nu) = \mathcal{F}\{f\} = \int_{\mathbb{R}^n} f(x) e^{-2 \pi i x\cdot\nu} \, \mathrm{d}x $
 where the dot between x and ν indicates the inner product^{w} of R^{n}.
 A 2 dimensional Fourier series is used in video compression.
 A ^{*}discrete Fourier transform can be computed very efficiently by a ^{*}fast Fourier transform.
 In mathematical analysis, many generalizations of Fourier series have proven to be useful.
 They are all special cases of decompositions over an orthonormal basis of an inner product space.^{[31]}
 ^{*}Spherical harmonics are a complete set of orthogonal functions on the sphere, and thus may be used to represent functions defined on the surface of a sphere, just as circular functions (sines and cosines) are used to represent functions on a circle via Fourier series.^{[32]}
 Spherical harmonics are ^{*}basis functions for SO(3). See Laplace series^{w}.
 Every continuous function in the function space can be represented as a ^{*}linear combination of basis functions, just as every vector in a vector space can be represented as a linear combination of basis vectors.
 Every quadratic polynomial can be written as a1+bt+ct^{2}, that is, as a linear combination of the basis functions 1, t, and t^{2}.
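As a concrete check of the coefficient formulas, the b_n of a square wave can be estimated numerically; the classic result is that only odd harmonics appear, with amplitude 4/(pi*n). A sketch (the function names and sample counts are arbitrary choices):

```python
import math

def b_n(f, n, p=2 * math.pi, samples=20000):
    """Numerical estimate of b_n = (2/p) * integral over one period of
    f(t) * sin(2*pi*n*t / p), using a midpoint sum."""
    dt = p / samples
    return (2 / p) * dt * sum(
        f((k + 0.5) * dt) * math.sin(2 * math.pi * n * (k + 0.5) * dt / p)
        for k in range(samples))

square = lambda t: 1.0 if math.sin(t) >= 0 else -1.0

# only odd harmonics, with amplitudes 4 / (pi * n)
assert abs(b_n(square, 1) - 4 / math.pi) < 1e-3
assert abs(b_n(square, 3) - 4 / (3 * math.pi)) < 1e-3
assert abs(b_n(square, 2)) < 1e-3
```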
Transforms
Fourier transforms^{w} generalize Fourier series to nonperiodic functions like a single pulse of a square wave.
The more localized in the time domain (the shorter the pulse) the more the Fourier transform is spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle^{w}.
The Fourier transform of the Dirac delta function^{w} gives G(f)=1
 $ G(\omega)=\mathcal{F}\{f(t)\}=\int_{-\infty}^\infty f(t) e^{-i\omega t}dt $
 Laplace transforms^{w} generalize Fourier transforms to complex frequency $ s=\sigma+i\omega $.
 Complex frequency includes a term corresponding to the amount of damping.
 $ F(s)=\mathcal{L}\{f(t)\}=\int_0^\infty f(t) e^{-\sigma t}e^{-i \omega t}dt $
 $ \mathcal{L}\{ \delta(t-a) \} = e^{-as} $, (assuming a > 0)
 $ \mathcal{L}\{e^{at} \}= \frac{1}{s - a} $
 The inverse Laplace transform^{w} is given by
 $ f(t) = \mathcal{L}^{-1} \{F\} = \frac{1}{2\pi i}\lim_{T\to\infty}\int_{\gamma-iT}^{\gamma+iT}F(s)e^{st}\,ds, $
 where the integration is done along the vertical line Re(s) = γ in the complex plane^{w} such that γ is greater than the real part of all ^{*}singularities of F(s) and F(s) is bounded on the line, for example if contour path is in the ^{*}region of convergence.
 If all singularities are in the left half-plane, or F(s) is an ^{*}entire function, then γ can be set to zero and the above inverse integral formula becomes identical to the ^{*}inverse Fourier transform.^{[33]}
 Integral transforms^{w} generalize Fourier transforms to other kernels^{w} (besides sine^{w} and cosine^{w})
 Cauchy kernel =$ \frac{1}{\zeta-x} \quad \text{or} \quad \frac{1}{2\pi i} \cdot \frac{1}{\zeta-x} $
 Hilbert kernel = $ \cot\frac{\theta-t}{2} $
 Poisson Kernel:
 For the ball of radius r, $ B_{r} $, in R^{n}, the Poisson kernel takes the form:
 $ P(x,\zeta) = \frac{r^2-|x|^2}{r} \cdot \frac{1}{|\zeta-x|^n} \cdot \frac{1}{\omega_{n}} $
 where $ x\in B_{r} $, $ \zeta\in S $ (the surface of $ B_{r} $), and $ \omega _{n} $ is the ^{*}surface area of the unit n-sphere.
 unit disk (r=1) in the complex plane:^{[34]}
 $ K(x,\phi) = \frac{1^2-|x|^2}{1} \cdot \frac{1}{|e^{i\phi}-x|^2}\cdot \frac{1}{2\pi} $
 Dirichlet kernel
 $ D_n(x)=\sum_{k=-n}^n e^{ikx}=1+2\sum_{k=1}^n\cos(kx)=\frac{\sin\left(\left(n + \frac{1}{2}\right) x \right)}{\sin(\frac{x}{2})} \approx 2\pi\delta(x) $
The ^{*}convolution theorem states that^{[35]}
 $ \mathcal{F}\{f*g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\} $
where $ \cdot $ denotes pointwise multiplication. It also works the other way around:
 $ \mathcal{F}\{f \cdot g\}= \mathcal{F}\{f\}*\mathcal{F}\{g\} $
By applying the inverse Fourier transform $ \mathcal{F}^{1} $, we can write:
 $ f*g= \mathcal{F}^{1}\big\{\mathcal{F}\{f\}\cdot\mathcal{F}\{g\}\big\} $
and:
 $ f \cdot g= \mathcal{F}^{1}\big\{\mathcal{F}\{f\}*\mathcal{F}\{g\}\big\} $
This theorem also holds for the Laplace transform^{w}.
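The convolution theorem can be verified directly for the discrete Fourier transform, where convolution is circular; a self-contained sketch (function names are mine):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def circular_convolve(f, g):
    """(f * g)[n] = sum over m of f[m] * g[(n - m) mod N]."""
    N = len(f)
    return [sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)]

f = [1.0, 2.0, 3.0, 4.0]
g = [0.5, 0.0, -0.5, 1.0]

# DFT{f * g} equals the pointwise product DFT{f} . DFT{g}
lhs = dft(circular_convolve(f, g))
rhs = [a * b for a, b in zip(dft(f), dft(g))]
assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
```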
The ^{*}Hilbert transform is a ^{*}multiplier operator. The multiplier of H is σ_{H}(ω) = −i sgn(ω) where sgn is the ^{*}signum function. Therefore:
 $ \mathcal{F}(H(u))(\omega) = (-i\,\operatorname{sgn}(\omega)) \cdot \mathcal{F}(u)(\omega) $
where $ \mathcal{F} $ denotes the Fourier transform^{w}.
Since sgn(x) = sgn(2πx), it follows that this result applies to the three common definitions of $ \mathcal{F} $.
By Euler's formula^{w},
 $ \sigma_H(\omega) = \begin{cases} i = e^{+\frac{i\pi}{2}}, & \text{for } \omega < 0\\ 0, & \text{for } \omega = 0\\ -i = e^{-\frac{i\pi}{2}}, & \text{for } \omega > 0 \end{cases} $
Therefore, H(u)(t) has the effect of shifting the phase of the ^{*}negative frequency components of u(t) by +90° (π/2 radians) and the phase of the positive frequency components by −90°.
And i·H(u)(t) has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation.
In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear timeinvariant system (LTI).
At any given moment, the output is an accumulated effect of all the prior values of the input function.
Differential equations
 See also: ^{*}Variation of parameters
^{*}Simple harmonic motion of a mass on a spring is a secondorder linear ordinary differential equation^{w}.
 $ Force = mass \cdot acc = m\frac{\mathrm{d}^2 x}{\mathrm{d}t^2} = -kx, $
where m is the inertial mass, x is its displacement from the equilibrium, and k is the spring constant.
Solving for x produces
 $ x(t) = A\cos\left(\omega t - \varphi\right), $
A is the amplitude (maximum displacement from the equilibrium position), $ \omega = 2\pi f = \sqrt{k/m} $ is the angular frequency^{w}, and φ is the phase.
Energy passes back and forth between the potential energy in the spring and the kinetic energy of the mass.
The important thing to note here is that the frequency of the oscillation depends only on the mass and the stiffness of the spring and is totally independent of the amplitude.
That is the defining characteristic of resonance.
^{*}Kirchhoff's voltage law states that the sum of the emfs in any closed loop of any electronic circuit is equal to the sum of the ^{*}voltage drops in that loop.^{[36]}
 $ V(t) = V_R + V_L + V_C $
V is the voltage, R is the resistance, L is the inductance, C is the capacitance.
 $ V(t) = RI(t) + L \frac{dI(t)}{dt} + \frac{1}{C} \int_{0}^t I(\tau)\, d\tau $
I = dQ/dt is the current.
It makes no difference whether the current is a small number of charges moving very fast or a large number of charges moving slowly.
In reality ^{*}the latter is the case.
If V(t)=0 then the only solution to the equation is the transient response, which is a rapidly decaying sine wave at the resonant frequency of the circuit.
 Like a mass (inductance) on a spring (capacitance) the circuit will resonate at one frequency.
 Energy passes back and forth between the capacitor and the inductor with some loss as it passes through the resistor.
If V(t)=sin(t) from −∞ to +∞ then the only solution is a sine wave with the same frequency as V(t) but with a different amplitude and phase.
If V(t) is zero until t=0 and then equals sin(t) then I(t) will be zero until t=0 after which it will consist of the steady state response plus a transient response.
From Wikipedia:Characteristic equation (calculus):
Starting with a linear homogeneous differential equation with constant coefficients $ a_{n}, a_{n-1}, \ldots , a_{1}, a_{0} $,
 $ a_{n}y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_{1}y^\prime + a_{0}y = 0 $
it can be seen that if $ y(x) = e^{rx} \, $, each term would be a constant multiple of $ e^{rx} \, $. This results from the fact that the derivative of the exponential function^{w} $ e^{rx} \, $ is a multiple of itself. Therefore, $ y' = re^{rx} \, $, $ y'' = r^{2}e^{rx} \, $, and $ y^{(n)} = r^{n}e^{rx} \, $ are all multiples. This suggests that certain values of $ r \, $ will allow multiples of $ e^{rx} \, $ to sum to zero, thus solving the homogeneous differential equation. In order to solve for $ r \, $, one can substitute $ y = e^{rx} \, $ and its derivatives into the differential equation to get
 $ a_{n}r^{n}e^{rx} + a_{n-1}r^{n-1}e^{rx} + \cdots + a_{1}re^{rx} + a_{0}e^{rx} = 0 $
Since $ e^{rx} \, $ can never equate to zero, it can be divided out, giving the characteristic equation
 $ a_{n}r^{n} + a_{n-1}r^{n-1} + \cdots + a_{1}r + a_{0} = 0 $
By solving for the roots, $ r \, $, in this characteristic equation, one can find the general solution to the differential equation. For example, if $ r \, $ is found to equal to 3, then the general solution will be $ y(x) = ce^{3x} \, $, where $ c \, $ is an arbitrary constant^{w}.
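As a concrete sketch (the example equation and names are mine): for y″ − 5y′ + 6y = 0 the characteristic equation is r² − 5r + 6 = 0, with roots 2 and 3, and e^{2t}, e^{3t} can be checked against the differential equation by finite differences:

```python
import math

# Characteristic equation of y'' - 5y' + 6y = 0 is r^2 - 5r + 6 = 0.
a, b, c = 1.0, -5.0, 6.0
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)  # roots 3 and 2

def residual(r, t=0.7, h=1e-4):
    """Finite-difference residual of y'' - 5y' + 6y for y = e^{rt};
    it should be ~0 when r is a root of the characteristic equation."""
    y = lambda s: math.exp(r * s)
    d1 = (y(t + h) - y(t - h)) / (2 * h)          # central first derivative
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)  # central second derivative
    return d2 - 5 * d1 + 6 * y(t)
```

The residuals at r = 2 and r = 3 are zero to within finite-difference error, confirming that each root gives a solution e^{rt}.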
Partial derivatives
 See also: ^{*}Currying
^{*}Partial derivatives and ^{*}multiple integrals generalize derivatives and integrals to multiple dimensions.
The partial derivative with respect to one variable $ \frac{\part f(x,y)}{\part x} $ is found by simply treating all other variables as though they were constants.
Multiple integrals are found the same way.
Let f(x, y, z) be a scalar function^{w} (for example electric potential energy or temperature).
 A 2 dimensional example of a scalar function would be an elevation map.
 (Contour lines of an elevation map are an example of a ^{*}level set.)
The total derivative^{w} of f(x(t), y(t)) with respect to t is^{[37]}
 $ \frac{\operatorname df}{\operatorname dt} = { \partial f \over \partial x}{\operatorname dx \over \operatorname dt} + {\partial f \over \partial y}{\operatorname dy \over \operatorname dt } $
And the differential^{w} is
 $ \operatorname df = { \partial f \over \partial x}\operatorname dx + {\partial f \over \partial y} \operatorname dy . $
Gradient of scalar field
The Gradient^{w} of f(x, y, z) is a vector field whose value at each point is a vector (technically it's a covector^{w} because it has units of distance^{−1}) that points "uphill" (in the direction of steepest increase) with a magnitude equal to the slope^{w} of the function at that point.
You can think of it as how much the function changes per unit distance.
The negative gradient of temperature gives the direction of heat flow (Fourier's law).
 $ \operatorname{grad}(f) = \nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k} = \mathbf{F} $
For static (unchanging) fields the electric field^{w} is the negative gradient of the electric potential. Image below shows the potential of a single point charge.
Its negative gradient gives the electric field, which is shown in the 2 images below. In the image on the left the field strength is proportional to the length of the vectors. In the image on the right the field strength is proportional to the density of the ^{*}flux lines. The image is 2 dimensional and therefore the flux density in the image follows an inverse first power law, but in reality the field lines from a real proton or electron spread outward in 3 dimensions and therefore follow an inverse square law. Inverse square means that at twice the distance the field is four times weaker.
The field of 2 point charges is simply the linear sum of the separate charges.
Divergence
The Divergence^{w} of a vector field is a scalar.
The divergence of the electric field is nonzero wherever there is electric charge^{w} and zero everywhere else.
^{*}Field lines begin and end at charges because the charges create the electric field.
 $ \operatorname{div}\,\mathbf{F} = {\color{red} \nabla\cdot\mathbf{F} } = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right) \cdot (F_x,F_y,F_z) = \frac{\partial F_x}{\partial x} +\frac{\partial F_y}{\partial y} +\frac{\partial F_z}{\partial z}. $
The Laplacian^{w} is the divergence of the gradient of a function:
 $ \Delta f = \nabla^2 f = (\nabla \cdot \nabla) f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}. $
 ^{*}elliptic operators generalize the Laplacian.
Curl
 See also: ^{*}Biot–Savart law
The Curl^{w} of a vector field describes how much the vector field is twisted.
(The field may even go in circles.)
The curl at a certain point of a magnetic field^{w} is the current^{w} vector at that point because current creates the magnetic field^{w}.
In 3 dimensions the dual of the current vector is a bivector.
 $ \text{curl} (\mathbf{F}) = {\color{blue} \nabla \times \mathbf{F} } = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ {\frac{\partial}{\partial x}} & {\frac{\partial}{\partial y}} & {\frac{\partial}{\partial z}} \\ F_x & F_y & F_z \end{vmatrix} $
 $ \text{curl}( \mathbf{F}) = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) \mathbf{i} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) \mathbf{j} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) \mathbf{k} $
In 2 dimensions this reduces to a single scalar
 $ \text{curl}( \mathbf{F}) = \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) $
The curl of the gradient of any scalar field is always zero.
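This identity can be spot-checked numerically. The sketch below (the test function f(x,y,z) = x²y + yz³ and all names are chosen for illustration) differentiates the analytic gradient with central differences and confirms the curl vanishes:

```python
def grad_f(p):
    """Analytic gradient of f(x, y, z) = x^2*y + y*z^3."""
    x, y, z = p
    return (2 * x * y, x * x + z ** 3, 3 * y * z * z)

def curl(F, p, h=1e-5):
    """Central-difference curl of a vector field F at point p."""
    def d(i, j):  # partial of component F_i with respect to coordinate j
        q1, q2 = list(p), list(p)
        q1[j] += h
        q2[j] -= h
        return (F(q1)[i] - F(q2)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

c = curl(grad_f, [1.2, -0.7, 0.5])  # all three components near zero
```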
The curl of a vector field in 4 dimensions would no longer be a vector. It would be a bivector. However the curl of a bivector field in 4 dimensions would still be a vector.
See also: ^{*}differential forms.
Gradient of vector field
The Gradient^{w} of a vector field is a tensor field. Each row is the gradient of the corresponding scalar function:
 $ \nabla \mathbf{F} = \begin{bmatrix} \frac{\partial}{\partial x} \mathbf{e}_x, \frac{\partial}{\partial y} \mathbf{e}_y, \frac{\partial}{\partial z} \mathbf{e}_z \end{bmatrix} \begin{bmatrix} f_x \mathbf{e}_x \\ f_y \mathbf{e}_y \\ f_z \mathbf{e}_z \end{bmatrix} = \begin{bmatrix} {\color{red} \frac{\partial f_x}{\partial x} \mathbf{e}_{xx} } & {\color{blue} \frac{\partial f_x}{\partial y} \mathbf{e}_{xy} } & {\color{blue} \frac{\partial f_x}{\partial z} \mathbf{e}_{xz} } \\ {\color{blue} \frac{\partial f_y}{\partial x} \mathbf{e}_{yx} } & {\color{red} \frac{\partial f_y}{\partial y} \mathbf{e}_{yy} } & {\color{blue} \frac{\partial f_y}{\partial z} \mathbf{e}_{yz} } \\ {\color{blue} \frac{\partial f_z}{\partial x} \mathbf{e}_{zx} } & {\color{blue} \frac{\partial f_z}{\partial y} \mathbf{e}_{zy} } & {\color{red} \frac{\partial f_z}{\partial z} \mathbf{e}_{zz} } \end{bmatrix} $
 Remember that $ \mathbf{e}_{xy} = -\mathbf{e}_{yx} $ because rotation from y to x is the negative of rotation from x to y.
Partial differential equations can be classified as ^{*}parabolic, ^{*}hyperbolic and ^{*}elliptic.
Green's theorem
The line integral^{w} along a 2D vector field is:
 $ \int (V_1 \cdot dx + V_2 \cdot dy) = \int_a^b \bigg [V_1(x(t),y(t)) \frac{dx}{dt} + V_2(x(t),y(t)) \frac{dy}{dt} \bigg ] dt $

Green's theorem^{w} states that if you want to know how many field lines cross (or run parallel to) the boundary of a given region then you can either perform a line integral or you can simply count the number of charges (or the amount of current) within that region. See Divergence theorem^{w}
 $ \oint_S \vec{F} \cdot \mathrm{d} \vec{s} = \iiint_D \nabla \cdot \vec{F} \,\mathrm{d}V = \iiint_D \nabla^2 f \,\mathrm{d}V $
In 2 dimensions this is
 $ \oint_S \vec{F} \cdot \vec{n} \ \mathrm{d} s = \iint_D \nabla \cdot \vec{F} \ \mathrm{d} A= \iint_D \nabla^2 f \ \mathrm{d} A $
Green's theorem is perfectly obvious when dealing with vector fields but is much less obvious when applied to complex valued functions in the complex plane.
The complex plane
 Highly recommend: Fundamentals of complex analysis with applications to engineering and science by Saff and Snider
 External link: http://www.solitaryroad.com/c606.html
The formula for the derivative of a complex function f at a point z_{0} is the same as for a real function:
 $ f'(z_0) = \lim_{z \to z_0} {f(z)  f(z_0) \over z  z_0 }. $
Every complex function can be written in the form $ f(z)=f(x+iy)=f_x(x,y)+i f_y(x,y) $
Because the complex plane is two dimensional, z can approach z_{0} from an infinite number of different directions.
However, if within a certain region the function f is holomorphic^{w} (that is, complex differentiable^{w}) then, within that region, it has only a single derivative whose value does not depend on the direction in which z approaches z_{0}, despite the fact that f_{x} and f_{y} each have 2 partial derivatives (one in the x and one in the y direction).

 $ {d^2f \over dz^2} \quad = \quad {\part^2 f_x \over \part x^2} + i {\part^2 f_y \over \part x^2} \quad = \quad {\part^2 f_y \over \part y \part x} - i {\part^2 f_x \over \part y \part x} $
This is only possible if the Cauchy–Riemann conditions^{w} are true.
 $ \frac{\part f_x}{\part x}=\frac{\part f_y}{\part y}\ ,\ \quad \frac{\part f_y}{\part x}=-\frac{\part f_x}{\part y} $
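A numerical sanity check (an illustrative sketch; the function name is made up): estimate the four partials by central differences and confirm that the Cauchy–Riemann residuals vanish for the holomorphic f(z) = z² but not for the non-holomorphic f(z) = z̄:

```python
def cr_residuals(f, x, y, h=1e-6):
    """Central-difference Cauchy-Riemann residuals of f at x + iy.
    Returns (du/dx - dv/dy, dv/dx + du/dy); both are ~0 iff CR holds."""
    u = lambda x, y: f(complex(x, y)).real
    v = lambda x, y: f(complex(x, y)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return ux - vy, vx + uy

holo = cr_residuals(lambda z: z * z, 0.3, -1.1)           # both near 0
not_holo = cr_residuals(lambda z: z.conjugate(), 0.3, -1.1)  # first is 2
```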
An ^{*}entire function, also called an integral function, is a complexvalued function that is holomorphic at all finite points over the whole complex plane.
As with real valued functions, a line integral of a holomorphic function depends only on the starting point and the end point and is totally independent of the path taken.
 $ \int f(z) \cdot dz = \int (f_x \cdot dx - f_y \cdot dy) + i \int (f_y \cdot dx + f_x \cdot dy) $
 $ \int f(z) \cdot dz = F(z) = \int_0^t f(z(t)) \cdot \frac{dz}{dt} \cdot dt $
 $ \int_a^b f(z) \cdot dz = F(b) - F(a) $
The starting point and the end point for any loop are the same. This, of course, implies Cauchy's integral theorem^{w} for any holomorphic function f:
$ \oint f(z) \, dz = -\iint \left( \frac{ \partial f_x}{\partial y} + \frac{ \partial f_y}{\partial x} \right) dx \, dy + i \iint \left( \frac{\partial f_x}{\partial x} - \frac{ \partial f_y}{\partial y} \right) \, dx \, dy = 0 $
 $ \oint f(z) \, dz = \iint \left( {\color{blue} \nabla \times \bar{f}} + i {\color{red} \nabla \cdot \bar{f}} \right) \, dx \, dy = 0 $
Therefore curl and divergence must both be zero for a function to be holomorphic.
Green's theorem^{w} for functions (not necessarily holomorphic) in the complex plane:
$ \oint f(z) \, dz = 2i \iint \left( df/d\bar{z} \right) \, dx \, dy = i \iint \left( {\partial f \over \partial x} + i {\partial f \over \partial y} \right) \, dx \, dy $
Computing the residue^{w} of a monomial^{[38]}
 $ \begin{align} \oint_C (z-z_0)^n dz = \int_0^{2\pi} e^{in \theta} \cdot i e^{i \theta} d \theta = i \int_0^{2\pi} e^{i (n+1) \theta} d\theta = \begin{cases} 2\pi i & \text{if } n = -1 \\ 0 & \text{otherwise} \end{cases} \end{align} $
 where $ C $ is the circle with radius $ 1 $ therefore $ z \to e^{i\theta} $ and $ dz \to d(e^{i\theta}) = ie^{i\theta}d\theta $
 $ \oint_{C_r}\frac{f(z)}{z-z_0}dz = \oint_{C_r}\frac{f(z_0)}{z-z_0}dz + \oint_{C_r}\frac{f(z)-f(z_0)}{z-z_0}dz = f(z_0)2\pi i + 0 $
The last term in the equation above equals zero when r=0. Since its value is independent of r it must therefore equal zero for all values of r.
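The case analysis for the monomial integral can be reproduced with a plain Riemann sum around the unit circle (a sketch; the function name is made up):

```python
import cmath, math

def monomial_loop_integral(n, z0=0.0, N=20000):
    """Riemann-sum contour integral of (z - z0)^n around the unit circle
    z = z0 + e^{i*theta}, so dz = i * e^{i*theta} * d(theta)."""
    total = 0j
    dtheta = 2 * math.pi / N
    for k in range(N):
        z_minus_z0 = cmath.exp(1j * dtheta * k)
        dz = 1j * z_minus_z0 * dtheta
        total += z_minus_z0 ** n * dz
    return total

res_minus1 = monomial_loop_integral(-1)  # expect 2*pi*i
res_2 = monomial_loop_integral(2)        # expect 0
```

Only the n = −1 term survives, which is exactly why the residue picks out that coefficient.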
 $ \bigg| \int_\Gamma f(z) \cdot dz \bigg| \leq Max(|f(z)|) \cdot length(\Gamma) $
Cauchy's integral formula^{w} states that the value of a holomorphic function within a disc is determined entirely by the values on the boundary of the disc.
Divergence can be nonzero outside the disc.
Cauchy's integral formula can be generalized to more than two dimensions.
 $ f^{(0)}(z_0)=\dfrac{1}{2\pi i}\oint_\gamma f(z)\frac{1}{z-z_0}dz $
Which gives:
 $ f'(z_0)=\dfrac{1}{2\pi i}\oint_\gamma f(z)\frac{1}{(z-z_0)^2}dz $
 $ f''(z_0)=\dfrac{2}{2\pi i}\oint_\gamma f(z)\frac{1}{(z-z_0)^3}dz $
 $ f^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_\gamma f(z)\frac{1}{(z-z_0)^{n+1}}\, dz $
 Note that n does not have to be an integer. See ^{*}Fractional calculus.
The Taylor series becomes:
 $ f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n \quad \text{where} \quad a_n=\frac{1}{2\pi i} \oint_\gamma \frac{f(z)\,\mathrm{d}z}{(z-z_0)^{n+1}} = \frac{f^{(n)}(z_0)}{n!} $
The ^{*}Laurent series for a complex function f(z) about a point z_{0} is given by:
 $ f(z)=\sum_{n=-\infty}^\infty a_n(z-z_0)^n \quad \text{where} \quad a_n=\frac{1}{2\pi i} \oint_\gamma \frac{f(z)\,\mathrm{d}z}{(z-z_0)^{n+1}} $
The positive subscripts correspond to a line integral around the outer part of the annulus and the negative subscripts correspond to a line integral around the inner part of the annulus. In reality it makes no difference where the line integral is so both line integrals can be moved until they correspond to the same contour gamma. See also: ^{*}Ztransform
The function $ \frac{1}{(z-1)(z-2)} $ has poles at z=1 and z=2. It therefore has 3 different Laurent series centered on the origin (z_{0} = 0):
 For 0 < |z| < 1 the Laurent series has only nonnegative subscripts and is the Taylor series.
 For 1 < |z| < 2 the Laurent series has positive and negative subscripts.
 For 2 < |z| the Laurent series has only negative subscripts.
^{*}Cauchy formula for repeated integration:
 $ f^{(-n)}(a) = \frac{1}{(n-1)!} \int_0^a f(z) \left(a-z\right)^{n-1} \,\mathrm{d}z $
For every holomorphic function^{w} $ f(z)=f(x+iy)=f_x(x,y)+i f_y(x,y) $ both f_{x} and f_{y} are harmonic functions^{w}.
Any twodimensional harmonic function is the real part of a complex analytic function^{w}.
See also: complex analysis^{w}.^{[39]}
 f_{y} is the ^{*}harmonic conjugate of f_{x}.
 Geometrically f_{x} and f_{y} are related as having orthogonal trajectories, away from the zeroes of the underlying holomorphic function; the contours on which f_{x} and f_{y} are constant (^{*}equipotentials and ^{*}streamlines) cross at right angles.
 In this regard, f_{x}+if_{y} would be the complex potential, where f_{x} is the ^{*}potential function and f_{y} is the ^{*}stream function.^{[40]}
 f_{x} and f_{y} are both solutions of Laplace's equation^{w} $ \nabla^2 f = 0 $ so divergence of the gradient is zero
 ^{*}Legendre functions are solutions to Legendre's differential equation.
 This ordinary differential equation is frequently encountered when solving Laplace's equation (and related partial differential equations) in spherical coordinates.
 A harmonic function^{w} is a scalar potential function therefore the curl of the gradient will also be zero.
 See ^{*}Potential theory
 Harmonic functions are real analogues to holomorphic functions.
 All harmonic functions are analytic, i.e. they can be locally expressed as power series.
 This is a general fact about ^{*}elliptic operators, of which the Laplacian is a major example.
 The value of a harmonic function at any point inside a disk is a ^{*}weighted average of the value of the function on the boundary of the disk.
 $ P[u](x) = \int_S u(\zeta)P(x,\zeta)d\sigma(\zeta).\, $
 The ^{*}Poisson kernel gives different weight to different points on the boundary except when x=0.
 The value at the center of the disk (x=0) equals the average of the equally weighted values on the boundary.
 All locally integrable functions satisfying the meanvalue property are both infinitely differentiable and harmonic.
 The kernel itself appears to simply be 1/r^n shifted to the point x and multiplied by different constants.
 For a circle (K = Poisson Kernel):
 $ f(re^{i\phi})= \oint_0^{2\pi} f(Re^{i\theta}) K(R,r,\theta\phi) d\theta $
$ \frac{d(a(x,y)+ib(x,y))}{d(x+iy)} = \frac{da+idb}{dx+idy} = \frac{(da+idb)(dx-idy)}{dx^2+dy^2} = \frac{da\,dx+db\,dy+i(db\,dx-da\,dy)}{dx^2+dy^2} $ $ \frac{d(a(x,y)+ib(x,y))}{d(x+iy)} = \frac{da}{dx} +\frac{db}{dy} +i \bigg(\frac{db}{dx} -\frac{da}{dy} \bigg) = {\color{red} \nabla \cdot f} + i {\color{blue} \nabla \times f} $
Calculus of variations
 ^{*}Calculus of variations, ^{*}Functional, ^{*}Functional analysis, ^{*}Higherorder function
Whereas calculus is concerned with infinitesimal changes of variables, calculus of variations is concerned with infinitesimal changes of the underlying function itself.
Calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals.
A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is obviously a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, where the optical length depends upon the material of the medium. One corresponding concept in mechanics is the principle of least action.^{[41]}
Discrete mathematics
Set theory
 See also: ^{*}Naive set theory, ^{*}Zermelo–Fraenkel set theory, Set theory^{w}, ^{*}Set notation, ^{*}Setbuilder notation, Set^{w}, ^{*}Algebra of sets, ^{*}Field of sets, and ^{*}Sigmaalgebra
$ \varnothing $ is the empty set (the additive identity)
$ \mathbf{U} $ is the universe of all elements (the multiplicative identity)
$ a \in A $ means that a is an element^{w} (or member) of set A. In other words a is in A.
 $ \{ x \in \mathbf{A} : x \notin \mathbb{R} \} $ means the set of all x's that are members of the set A such that x is not a member of the real numbers^{w}. Could also be written $ \{ \mathbf{A} - \mathbb{R} \} $
A set^{w} does not allow multiple instances of an element. $ \{1,1,2\} = \{1,2\} $
 A multiset^{w} does allow multiple instances of an element. $ \{1,1,2\} \neq \{1,2\} $
A set can contain other sets. $ \{1,\{2\},3\} \neq \{1,2,3\} $
$ A \subset B $ means that A is a proper subset^{w} of B
 $ A \subseteq A $ means that A is a subset^{w} of itself. But a set is not a proper subset^{w} of itself.
$ A \cup B $ is the Union^{w} of the sets A and B. In other words, $ \{A+B\} $
 $ \{1,2\}+\{2,3\}=\{1,2,3\} $
$ A \cap B $ is the Intersection^{w} of the sets A and B. In other words, $ \{A \cdot B\} $ All a's in B.
 Associative: $ A \cdot \{B \cdot C\} = \{A \cdot B\} \cdot C $
 Distributive: $ A \cdot \{B + C\}=\{A \cdot B\} + \{A \cdot C\} $
 Commutative: $ \{A \cdot B\} =\{B \cdot A\} $
$ A \setminus B $ is the Set difference^{w} of A and B. In other words, $ \{A - A \cdot B\} $
 $ \overline{A} $ or $ A^c = \{U - A\} $ is the complement^{w} of A.
$ A \bigtriangleup B $ or $ A \ominus B $ is the Anti-intersection^{w} (symmetric difference) of sets A and B, which is the set of all objects that are members of either A or B but not of both.
 $ A \bigtriangleup B = (A + B) - (A \cdot B) = (A - A \cdot B) + (B - A \cdot B) $
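Python's built-in set type implements all of these operations directly, which makes the identities easy to verify (a small illustrative sketch):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

union = A | B         # {1, 2, 3, 4, 5}
intersection = A & B  # {3, 4}
difference = A - B    # {1, 2}
sym_diff = A ^ B      # symmetric difference: {1, 2, 5}

# A triangle B = (A + B) - (A*B) = (A - A*B) + (B - A*B)
identity_holds = (A ^ B) == (A | B) - (A & B) == ((A - B) | (B - A))
```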
$ A \times B $ is the Cartesian product^{w} of A and B which is the set whose members are all possible ordered pairs^{w} (a, b) where a is a member of A and b is a member of B.
The Power set^{w} of a set A is the set whose members are all of the possible subsets of A.
A ^{*}cover of a set X is a collection of sets whose union contains X as a subset.^{[42]}
A subset A of a topological space X is called ^{*}dense (in X) if every point x in X either belongs to A or is arbitrarily "close" to a member of A.
 A subset A of X is ^{*}meagre if it can be expressed as the union of countably many nowhere dense subsets of X.
^{*}Disjoint union of sets $ A_0 $ = {1, 2, 3} and $ A_1 $ = {1, 2, 3} can be computed by finding:
 $ \begin{align} A^*_0 & = \{(1, 0), (2, 0), (3, 0)\} \\ A^*_1 & = \{(1, 1), (2, 1), (3, 1)\} \end{align} $
so
 $ A_0 \sqcup A_1 = A^*_0 \cup A^*_1 = \{(1, 0), (2, 0), (3, 0), (1, 1), (2, 1), (3, 1)\} $
Let H be the subgroup of the integers (mZ, +) = ({..., −2m, −m, 0, m, 2m, ...}, +) where m is a positive integer.
 Then the ^{*}cosets of H are the mZ + a = {..., −2m+a, −m+a, a, m+a, 2m+a, ...}.
 There are no more than m cosets, because mZ + m = m(Z + 1) = mZ.
 The coset (mZ + a, +) is the congruence class^{w} of a modulo m.^{[43]}
 Cosets are not usually themselves subgroups of G, only subsets.
$ \exists $ means "there exists at least one"
$ \exists! $ means "there exists one and only one"
$ \forall $ means "for all"
$ \land $ means "and" (not to be confused with wedge product^{w})
$ \lor $ means "or" (not to be confused with antiwedge product^{w})
Probability
$ \vert A \vert $ is the cardinality^{w} of A which is the number of elements in A. See measure^{w}.
$ P(A) = {\vert A \vert \over \vert U \vert} $ is the unconditional probability^{w} that A will happen.
$ P(A \mid B) = {\vert A \cdot B \vert \over \vert B \vert} $ is the conditional probability^{w} that A will happen given that B has happened.
$ P(A + B) = P(A) + P(B)  P(A \cdot B) $ means that the probability that A or B will happen is the probability of A plus the probability of B minus the probability that both A and B will happen.
$ P(A \cdot B) = P(A \cdot B \mid B)P(B) = P(A \cdot B \mid A)P(A) $ means that the probability that A and B will happen is the probability of "A and B given B" times the probability of B.
$ P(A \cdot B \mid B) = \frac{P(A \cdot B \mid A) \, P(A)}{P(B)}, $ is ^{*}Bayes' theorem
If you don't know the certainty then you can still know the probability. If you don't know the probability then you can always know the Bayesian probability. The Bayesian probability is the degree to which you expect something.
Even if you don't know anything about the system you can still know the ^{*}A priori Bayesian probability. As new information comes in, the ^{*}Prior probability is updated and replaced with the ^{*}Posterior probability by using ^{*}Bayes' theorem.
From Wikipedia:Base rate fallacy:
In a city of 1 million inhabitants let there be 100 terrorists and 999,900 nonterrorists. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software. 99% of the time it behaves correctly. 1% of the time it behaves incorrectly, ringing when it should not and failing to ring when it should. Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T  B), the probability that a terrorist has been detected given the ringing of the bell? Someone making the 'base rate fallacy' would infer that there is a 99% chance that the detected person is a terrorist. But that is not even close. For every 1 million faces scanned it will see 100 terrorists and will correctly ring 99 times. But it will also ring falsely 9,999 times. So the true probability is only 99/(9,999+99) or about 1%.
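The arithmetic in this example is short enough to verify directly (a sketch using the numbers from the passage; the variable names are mine):

```python
population = 1_000_000
terrorists = 100
accuracy = 0.99

true_positives = accuracy * terrorists                        # 99 correct rings
false_positives = (1 - accuracy) * (population - terrorists)  # 9999 false rings

# P(terrorist | bell) = correct rings / all rings
p_terrorist_given_ring = true_positives / (true_positives + false_positives)
# about 0.0098, i.e. roughly 1%, not 99%
```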
A permutation^{w} relates to the act of arranging all the members of a set^{w} into some sequence^{w} or ^{*}order.
The number of permutations of n distinct objects is n!^{w}.^{[44]}
 A derangement is a permutation of the elements of a set, such that no element appears in its original position.
In other words, derangement is a permutation that has no ^{*}fixed points.
The number of ^{*}derangements of a set of size n, usually written ^{*}!n, is called the "derangement number" or "de Montmort number".^{[45]}
 The ^{*}rencontres numbers are a triangular array of integers that enumerate permutations of the set { 1, ..., n } with specified numbers of fixed points: in other words, partial derangements.^{[46]}
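The derangement numbers satisfy the standard recurrence !n = (n − 1)(!(n − 1) + !(n − 2)), which gives a quick way to compute them (a sketch; the function name is made up):

```python
def subfactorial(n):
    """de Montmort number !n via !n = (n - 1) * (!(n-1) + !(n-2))."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    a, b = 1, 0  # !0 and !1
    for i in range(2, n + 1):
        a, b = b, (i - 1) * (a + b)
    return b

values = [subfactorial(n) for n in range(6)]  # [1, 0, 1, 2, 9, 44]
```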
A combination^{w} is a selection of items from a collection, such that the order of selection does not matter.
For example, given three numbers, say 1, 2, and 3, there are three ways to choose two from this set of three: 12, 13, and 23.
More formally, a kcombination of a set^{w} S is a subset of k distinct elements of S.
If the set has n elements, the number of kcombinations is equal to the binomial coefficient^{w}
 $ \binom nk = \textstyle\frac{n!}{k!(nk)!}. $ Pronounced n choose k. The set of all kcombinations of a set S is often denoted by $ \textstyle\binom Sk $.
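Python's standard library exposes both the count and the enumeration, which matches the "three ways to choose two from three" example above (an illustrative sketch):

```python
import math
from itertools import combinations

# "3 choose 2": the pairs 12, 13, 23.
pairs = list(combinations([1, 2, 3], 2))  # [(1, 2), (1, 3), (2, 3)]
count = math.comb(3, 2)                   # 3

# math.comb agrees with the formula n! / (k! * (n - k)!)
n, k = 10, 4
same = math.comb(n, k) == math.factorial(n) // (math.factorial(k) * math.factorial(n - k))
```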
The central limit theorem (CLT) establishes that, in most situations, when ^{*}independent random variables are added, their properly normalized sum tends toward a normal distribution^{w} (informally a "bell curve") even if the original variables themselves are not normally distributed.^{[47]}
In statistics^{w}, the standard deviation (SD, also represented by the Greek letter sigma σ^{w} or the Latin letter s) is a measure that is used to quantify the amount of variation or ^{*}dispersion of a set of data values.^{[48]}
A low standard deviation indicates that the data points tend to be close to the mean^{w} (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values.^{[49]}
The ^{*}hypergeometric distribution is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a success or a failure.
 In contrast, the ^{*}binomial distribution describes the probability of k successes in n draws with replacement.^{[50]}
Extreme value theory is used to model the risk of extreme, rare events, such as the 1755 Lisbon earthquake. It seeks to assess, from a given ordered sample of a given random variable, the probability of events that are more extreme than any previously observed.
See also ^{*}Dirichlet distribution, ^{*}Rice distribution, ^{*}Benford's law
Logic
From Wikipedia:Inductive reasoning
Given that "if A is true then the laws of cause and effect would cause B, C, and D to be true",
 An example of deduction would be
 "A is true therefore we can deduce that B, C, and D are true".
 An example of induction would be
 "B, C, and D are observed to be true therefore A might be true".
 A is a reasonable explanation for B, C, and D being true.
For example:
 A large enough asteroid impact would create a very large crater and cause a severe impact winter that could drive the dinosaurs to extinction.
 We observe that there is a very large crater in the Gulf of Mexico dating to very near the time of the extinction of the dinosaurs
 Therefore this impact is a reasonable explanation for the extinction of the dinosaurs.
The temptation is to jump to the conclusion that this must therefore be the true explanation. But this is not necessarily the case. The Deccan Traps in India also coincide with the disappearance of the dinosaurs and could have been the cause of their extinction.
A classical example of an incorrect inductive argument was presented by John Vickers:
 All of the swans we have seen are white.
 Therefore, we know that all swans are white.
The correct conclusion would be, "We expect that all swans are white". As a logic of induction, Bayesian inference does not determine which beliefs are a priori rational, but rather determines how we should rationally change the beliefs we have when presented with evidence. We begin by committing to a prior probability for a hypothesis based on logic or previous experience, and when faced with evidence, we adjust the strength of our belief (expectation) in that hypothesis in a precise manner using Bayesian logic.
Unlike deductive arguments, inductive reasoning allows for the possibility that the conclusion is false, even if all of the premises are true. Instead of being valid or invalid, inductive arguments are either strong or weak, which describes how probable it is that the conclusion is true.
It is often said that when thinking subjectively you will see whatever you want to see. In fact this is always the case. It's just that if you truly want to see what the facts say when they are allowed to speak for themselves then you will see that too. This is called "objectivity". It is man's capacity for objective reasoning that separates him from the animals. None of us are all-human though. All of us have a little bit of ego that tries to think subjectively. (Reality is not the ego's native habitat.)
Morphisms
 See also: Higher category theory^{w} and ^{*}Multivalued function (misnomer)
Every function^{w} has exactly one output for every input.
If the function f(x) is ^{*}invertible then its inverse function^{w} f^{−1}(x) has exactly one output for every input.
If it isn't invertible then it doesn't have an inverse function.
 f(x)=x/(x-1) is an ^{*}involution which is a function that is its own inverse function. f(f(x))=x
Injection^{w} (an invertible function); Injection + Surjection = Bijection^{w}; Surjection^{w}
A morphism^{w} is exactly the same as a function, but in Category theory^{w} every morphism is also assigned an "inverse", which is allowed to have more than one value or no value at all (and so need not itself be a function).
^{*}Categories consist of:
 Objects (usually Sets^{w})
 one source object (domain)
 one target object (codomain)
a morphism is represented by an arrow:
 $ f(x)=y $ is written $ f : x \to y $ where x is in X and y is in Y.
 $ g(y)=z $ is written $ g : y \to z $ where y is in Y and z is in Z.
The ^{*}image of y is z.
The ^{*}preimage (or ^{*}fiber) of z is the set of all y whose image is z and is denoted $ g^{1}[z] $
A space Y is a ^{*}covering space (a fiber bundle) of space Z if the map $ g : y \to z $ is locally homeomorphic^{w}.
 A covering space is a ^{*}universal covering space if it is ^{*}simply connected.
 The concept of a universal cover was first developed to define a natural domain for the ^{*}analytic continuation of an analytic function^{w}.
 The general theory of analytic continuation and its generalizations are known as ^{*}sheaf theory.
 The set of ^{*}germs can be considered to be the analytic continuation of an analytic function.
A topological space is ^{*}(path)connected if no part of it is disconnected.
A space is ^{*}simply connected if there are no holes passing all the way through it (therefore any loop can be shrunk to a point)
 See ^{*}Homology
Composition of morphisms:
 $ g(f(x)) $ is written $ g \circ f $
 f is the ^{*}pullback of g
 f is the ^{*}lift of $ g \circ f $
 ? is the ^{*}pushforward of ?
A ^{*}homomorphism is a map from one set to another of the same type which preserves the operations of the algebraic structure:
 $ f(x \cdot y) = f(x) \cdot f(y) $
 $ f(x + y) = f(x) + f(y) $
 See ^{*}Cauchy's functional equation
 A ^{*}Functor is a homomorphism with a domain in one category and a codomain in another.
 A ^{*}group homomorphism from (G, ∗) to (H, ·) is a ^{*}function h : G → H such that
 $ h(u*v) = h(u) \cdot h(v) = h(c) $ for all u*v = c in G.
 For example $ log(a*b) = log(a) + log(b) $
 Since log is a homomorphism that has an inverse that is also a homomorphism, log is an ^{*}isomorphism of groups. The logarithm^{w} is a ^{*}group isomorphism of the multiplicative group of ^{*}positive real numbers $ \mathbb{R}^+ $ to the ^{*}additive group of real numbers, $ \mathbb{R} $.
 See also ^{*}group action and ^{*}group orbit
A ^{*}Multicategory has morphisms with more than one source object.
A ^{*}Multilinear map $ f(v_1,\ldots,v_n) = W $:
 $ f\colon V_1 \times \cdots \times V_n \to W\text{,} $
has a corresponding Linear map^{w}:$ F(v_1\otimes \cdots \otimes v_n) = W $:
 $ F\colon V_1 \otimes \cdots \otimes V_n \to W\text{,} $
Numerical methods
 See also: ^{*}Explicit and implicit methods
One of the simplest problems is the evaluation of a function at a given point.
The most straightforward approach, just substituting the number into the formula, is sometimes not very efficient.
For polynomials, a better approach is using the ^{*}Horner scheme, since it reduces the necessary number of multiplications and additions.
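A minimal Python sketch of the Horner scheme (the example polynomial is my own):

```python
# Evaluate p(x) = 2x^3 - 6x^2 + 2x - 1 with only n multiplications
# and n additions, folding in one coefficient per step.
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients from highest to lowest degree."""
    result = 0.0
    for c in coeffs:
        result = result * x + c   # multiply by x, then add the next coefficient
    return result

# p(3) = 2*27 - 6*9 + 2*3 - 1 = 5
assert horner([2, -6, 2, -1], 3) == 5
```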
Generally, it is important to estimate and control ^{*}roundoff errors arising from the use of ^{*}floating point arithmetic.
^{*}Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
^{*}Extrapolation is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.
^{*}Regression is also similar, but it takes into account that the data is imprecise.
Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function.
The ^{*}least squares method is one popular way to achieve this.
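A minimal Python sketch of least-squares line fitting, using the closed-form solution of the normal equations for one variable (the data values are illustrative):

```python
# Fit a line y = m*x + b by minimizing the sum of squared residuals.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares solution for slope and intercept.
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

# Data lying exactly on y = 2x + 1 is recovered exactly.
m, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
assert abs(m - 2) < 1e-12 and abs(b - 1) < 1e-12
```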
Much effort has been put in the development of methods for solving ^{*}systems of linear equations.
 Standard direct methods, i.e., methods that use some ^{*}matrix decomposition
 ^{*}Gaussian elimination, ^{*}LU decomposition, ^{*}Cholesky decomposition for symmetric^{w} (or hermitian^{w}) and positive-definite matrix^{w}, and ^{*}QR decomposition for non-square matrices.
 ^{*}Jacobi method, ^{*}Gauss–Seidel method, ^{*}successive over-relaxation and ^{*}conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a ^{*}matrix splitting.
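A minimal Python sketch of one such iterative method, the Jacobi iteration, which is based on the splitting A = D + R (diagonal plus remainder); the small test system is illustrative and strictly diagonally dominant, so the iteration converges:

```python
# Jacobi iteration for A x = b: each step solves the i-th equation for x[i]
# using the previous iterate's values for all other unknowns.
def jacobi(A, b, iterations=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 9.0]                 # exact solution: x = [2, 1]
x = jacobi(A, b)
assert abs(x[0] - 2.0) < 1e-6 and abs(x[1] - 1.0) < 1e-6
```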
^{*}Root-finding algorithms are used to solve nonlinear equations.
 If the function is differentiable^{w} and the derivative is known, then Newton's method^{w} is a popular choice.
 ^{*}Linearization is another technique for solving nonlinear equations.
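A minimal Python sketch of Newton's method (the example function x² − 2, whose root is √2, is illustrative):

```python
# Newton's method: repeatedly apply the update x -> x - f(x)/f'(x),
# stopping once the step size is below the tolerance.
def newton(f, df, x, tolerance=1e-12, max_steps=50):
    for _ in range(max_steps):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tolerance:
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x=1.0)
assert abs(root - 2 ** 0.5) < 1e-10
```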
Optimization^{w} problems ask for the point at which a given function is maximized (or minimized).
Often, the point also has to satisfy some ^{*}constraints.
Differential equation^{w}: If you set up 100 fans to blow air from one end of the room to the other and then you drop a feather into the wind, what happens?
The feather will follow the air currents, which may be very complex.
One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again.
This is called the ^{*}Euler method for solving an ordinary differential equation.
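A minimal Python sketch of the Euler method (the test equation dy/dt = y is illustrative):

```python
# Euler method for dy/dt = f(t, y): advance in a straight line at the
# current slope for one time step, then re-measure — exactly as with the
# feather and the fans above.
def euler(f, t0, y0, dt, steps):
    t, y = t0, y0
    for _ in range(steps):
        y += dt * f(t, y)   # move at the locally measured rate for time dt
        t += dt
    return y

# dy/dt = y with y(0) = 1 has exact solution e^t; small steps approach e.
approx = euler(lambda t, y: y, 0.0, 1.0, dt=0.001, steps=1000)
assert abs(approx - 2.71828) < 0.01
```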
Information theory
From Wikipedia:Information theory:
Information theory studies the quantification, storage, and communication of information.
Communication over a channel, such as an Ethernet cable, is the primary motivation of information theory.
From Wikipedia:Quantities of information:
Shannon derived a measure of information content called the ^{*}self-information or "surprisal" of a message m:
 $ I(m) = \log \left( \frac{1}{p(m)} \right) = -\log( p(m) ) \, $
where $ p(m) = \mathrm{Pr}(M=m) $ is the probability that message m is chosen from all possible choices in the message space $ M $. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of ^{*}bits.
Information is transferred from a source to a recipient only if the recipient of the information did not already have the information to begin with. Messages that convey information that is certain to happen and already known by the recipient contain no real information. Infrequently occurring messages contain more information than more frequently occurring messages. This fact is reflected in the above equation: a certain message, i.e. of probability 1, has an information measure of zero. In addition, a compound message of two (or more) unrelated (or mutually independent) messages would have a quantity of information that is the sum of the measures of information of each message individually. That fact is also reflected in the above equation, supporting the validity of its derivation.
An example: The weather forecast broadcast is: "Tonight's forecast: Dark. Continued darkness until widely scattered light in the morning." This message contains almost no information. However, a forecast of a snowstorm would certainly contain information since such does not happen every evening. There would be an even greater amount of information in an accurate forecast of snow for a warm location, such as Miami. The amount of information in a forecast of snow for a location where it never snows (impossible event) is the highest (infinity).
The more surprising a message is the more information it conveys. The message "LLLLLLLLLLLLLLLLLLLLLLLLL" conveys exactly as much information as the message "25 L's". The first message which is 25 bytes long can therefore be "compressed" into the second message which is only 6 bytes long.
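The surprisal formula and its additivity for independent messages can be sketched in Python, using base-2 logarithms so the answer comes out in bits:

```python
import math

# Self-information: I(m) = -log2(p(m)) bits. Rare messages carry more
# information; a certain message carries none.
def surprisal_bits(p):
    return -math.log2(p)

assert surprisal_bits(1.0) == 0.0      # certain event: no information
assert surprisal_bits(0.5) == 1.0      # fair coin flip: 1 bit
assert surprisal_bits(0.25) == 2.0     # 1-in-4 event: 2 bits
# Independent messages add: I(p*q) = I(p) + I(q)
assert math.isclose(surprisal_bits(0.5 * 0.25),
                    surprisal_bits(0.5) + surprisal_bits(0.25))
```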
Early computers
 See also: ^{*}Time complexity
 ^{*}Analog computer
 ^{*}Abacus
 ^{*}Napier's bones
 ^{*}Slide rule
 ^{*}Curta
 ^{*}Lehmer sieve
 ^{*}Z2 (computer)
Tactical thinking
|  | Tactic X (Cooperate) | Tactic Y (Defect) |
|---|---|---|
| Tactic A (Cooperate) | 1, 1 | 5, 5 |
| Tactic B (Defect) | 5, 5 | 5, 5 |
 See also ^{*}Wikipedia:Strategy (game theory)
 From Wikipedia:Game theory:
In the accompanying example there are two players; Player one (blue) chooses the row and player two (red) chooses the column.
Each player must choose without knowing what the other player has chosen.
The payoffs are provided in the interior.
The first number is the payoff received by Player 1; the second is the payoff for Player 2.
Tit for tat is a simple and highly effective tactic in game theory for the iterated prisoner's dilemma.
An agent using this tactic will first cooperate, then subsequently replicate an opponent's previous action.
If the opponent previously was cooperative, the agent is cooperative.
If not, the agent is not.^{[51]}
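The tit-for-tat tactic is simple enough to sketch in a few lines of Python:

```python
# Tit for tat: cooperate on the first round, then copy whatever the
# opponent did on the previous round.
def tit_for_tat(opponent_history):
    if not opponent_history:
        return "C"                # first move: cooperate
    return opponent_history[-1]   # then mirror the opponent's last move

# Against an opponent who defects once, tit for tat retaliates once
# and then returns to cooperation.
opponent_moves = ["C", "D", "C", "C"]
my_moves = [tit_for_tat(opponent_moves[:i]) for i in range(len(opponent_moves))]
assert my_moves == ["C", "C", "D", "C"]
```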
|  | X | Y |
|---|---|---|
| A | 1, -1 | -1, 1 |
| B | -1, 1 | 1, -1 |
In zero-sum games the sum of the payoffs is always zero (meaning that a player can only benefit at the expense of others).
Cooperation is impossible in a zero-sum game.
John Forbes Nash proved that there is a Nash equilibrium (an optimum tactic) for every finite game.
In the zero-sum game shown above the optimum tactic for player 1 is to randomly choose A or B with equal probability.
Strategic thinking differs from tactical thinking by taking into account how the short-term goals, and therefore the optimum tactics, change over time.
For example, the opening, middlegame, and endgame of chess require radically different tactics.
See also: ^{*}Reverse game theory
Physics
 See also: Wikisource:The Mathematical Principles of Natural Philosophy (1846) and ^{*}Galilean relativity
 Reality is what doesn't go away when you aren't looking at it.
 Something is known beyond a reasonable doubt if any doubt that it is true is unreasonable. A doubt is reasonable if it is consistent with the laws of cause and effect.
In the four rules, as they came finally to stand in the 1726 edition, Newton effectively offers a methodology for handling unknown phenomena in nature and reaching towards explanations for them.
 Rule 1: We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
 Rule 2: Therefore to the same natural effects we must, as far as possible, assign the same causes.
 Rule 3: The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
 Rule 4: In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, not withstanding any contrary hypothesis that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.
 Newtonian mechanics^{w}, Lagrangian mechanics^{w}, and Hamiltonian mechanics^{w}
 The difference between the net kinetic energy and the net potential energy is called the “Lagrangian.”
 The action is defined as the time integral of the Lagrangian.
 The Hamiltonian is the sum of the kinetic and potential energies.
 ^{*}Noether's theorem states that every differentiable symmetry of the ^{*}action of a physical system has a corresponding ^{*}conservation law.
 ^{*}Special relativity, and ^{*}General relativity
 Energy is conserved in relativity and proper velocity is proportional to momentum at all velocities.
Highly recommended:
 Thinking Physics Is Gedanken Physics by Lewis Carroll Epstein
 Understanding physics by Isaac Asimov
Relativity
 Most confusion about relativity centers around a poor understanding of relativity of simultaneity.
 Since the length of an object is the distance from head to tail at one simultaneous moment, it follows that if two observers disagree about what events are simultaneous then they will also disagree on the length of objects.
 If a line of clocks appears synchronized to a stationary observer and appears to be out of sync to that same observer after accelerating to a certain velocity, then it follows that during the acceleration the clocks ran at different speeds. Some may even run backwards. This line of reasoning leads to general relativity.
 The gravitational time dilation at any point in a gravity well is equal to the time dilation that an object falling to that point would experience due its velocity (which never reaches "c") alone.
There might be a loophole to the law that you can't travel faster than light: if the distance between the front of a rocket and the back can be made zero then it is conceivable that it could travel faster than light.
Dimensional analysis
 See also: ^{*}Natural units
Any physical law that accurately describes the real world must be independent of the units (e.g. km or mm) used to measure the physical variables.
Consequently, every possible commensurate equation for the physics of the system can be written in the form
 $ a_0 \cdot D_0 = (a_1 \cdot D_1)^{p_1} (a_2 \cdot D_2)^{p_2}...(a_n \cdot D_n)^{p_n} $
The dimension, D_{n}, of a physical quantity can be expressed as a product of the basic physical dimensions length (L), mass (M), time (T), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J), each raised to a rational power.
Suppose we wish to calculate the ^{*}range of a cannonball when fired with a vertical velocity component $ V_\mathrm{y} $ and a horizontal velocity component $ V_\mathrm{x} $, assuming it is fired on a flat surface.
The quantities of interest and their dimensions are then
 range as L_{x}
 $ V_\mathrm{x} $ as L_{x}/T
 $ V_\mathrm{y} $ as L_{y}/T
 g as L_{y}/T^{2}
The equation for the range may be written:
 $ range = (V_x)^a (V_y)^b (g)^c $
Therefore
 $ \mathsf{L}_\mathrm{x} = (\mathsf{L}_\mathrm{x}/\mathsf{T})^a\,(\mathsf{L}_\mathrm{y}/\mathsf{T})^b (\mathsf{L}_\mathrm{y}/\mathsf{T}^2)^c\, $
and we may solve completely as $ a=1 $, $ b=1 $ and $ c=-1 $.
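A quick numeric check of these exponents, sketched in Python. The exponents a = 1, b = 1, c = −1 say the range scales as Vx·Vy/g; the dimensionless constant in front (2, from the standard flat-ground trajectory solution) is not fixed by the dimensional argument itself:

```python
# Range of a projectile on flat ground: fly horizontally at vx for the
# time it takes vy to carry you up and back down under gravity g.
def cannonball_range(vx, vy, g):
    time_of_flight = 2 * vy / g   # time to go up and come back down
    return vx * time_of_flight    # horizontal distance covered in that time

assert cannonball_range(10.0, 5.0, 10.0) == 10.0
# Doubling Vx doubles the range; doubling g halves it, as the exponents predict.
assert cannonball_range(20.0, 5.0, 10.0) == 20.0
assert cannonball_range(10.0, 5.0, 20.0) == 5.0
```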
Atoms
 See also: Periodic table^{w} and Spatial_structure_of_the_electron
The first pair of electrons fall into the ground shell. Once that shell is filled no more electrons can go into it. Any additional electrons go into higher shells.
The nucleus however works differently. The first few neutrons form the first shell. But any additional neutrons continue to fall into that same shell which continues to expand until there are 49 pairs of neutrons in that shell.
The electric force between two electrons is 4.166 * 10^{42} times stronger than the gravitational force.
The energy required to assemble a sphere of uniform charge density = $ \frac{3}{5}\frac{Q^2}{4 \pi \epsilon_0 r} $
 For Q=1 electron charge and r=1.8506 angstrom that's 4.669 eV. That energy is stored in the electric field of the electron.
 The energy per volume stored in an electric field is proportional to the square of the field strength so twice the charge has 4 times as much energy.
 4*4.669 = 18.676.
Mass of electron = M_{e} = 510,999 eV
Mass of proton = M_{p} = 938,272,000 eV
Mass of neutron = M_{n} = 939,565,000 eV
 M_{n} = M_{p} + M_{e} + 782,300 eV
Mass of muon = M_{μ} = 105,658,000 eV = 206.7683 * M_{e}
Mass of helium atom = 3,728,400,000 eV = 4*M_{e} + 4*M_{p} - 52.31*M_{e}
 The missing 52.31 electron masses of energy is called the mass deficit or nuclear binding energy. Fusing hydrogen into helium releases this energy.
Iron can be fused into heavier elements too, but doing so consumes energy rather than releasing it.
The total outward force for a solid 4dimensional sphere of uniform density in Clifford rotation is $ \frac{4}{5} \cdot \frac{m v^2}{r} $
The angular momentum of a solid 4dimensional sphere of uniform density is $ \frac{2}{3} \cdot mvr $
Empirically determined values for the size of atoms:
 Diatomic Hydrogen (Z=2) = 1.9002 angstroms
 Helium (Z=2) = 1.8506 angstroms
In 3 dimensions the force between 2 electrons is:
 $ F = \frac{1}{4\pi\varepsilon_0} { e_1 e_2 \over r^2} $
 where e_{1} and e_{2} are the charges of the two electrons,
 $ \varepsilon_0 = \frac{1}{180.95} \frac{e^2}{\text{eV} Å} $
 but in 4 dimensions:
 $ \varepsilon = \frac{2 \varepsilon_0}{\pi r} $
 where r is the distance at which the inverse square law gives the same result as the inverse cube law. In other words, the distance at which the inverse square law of the macroscopic world gives way to the inverse cube law of the microscopic world.
The angular momentum is:
 $ \frac{2}{3} \cdot m_\mathrm{e} v r = \hbar $
 where ħ is the reduced Planck constant
 $ \hbar={{h}\over{2\pi}} = 1.054\ 571\ 800(13)\times 10^{-34}\text{J}{\cdot}\text{s} $
 Therefore:
 $ v = \frac{3}{2} \cdot \frac{\hbar}{m_\mathrm{e}} \frac{1}{r} $
Density and thermal expansion
Densities relative to the liquid phase:
 Crystalline solids: 1.2
 Amorphous solids: 1.1
 Liquids: 1
Water ice is an exception: ice has a relative density of 0.9167.
From Wikipedia:Thermal expansion
Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so, high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is higher compared to that of crystals.
For an ideal gas, the volumetric thermal expansion (i.e., relative change in volume due to temperature change) depends on the type of process in which temperature is changed. Two simple cases are where the pressure is held constant (Isobaric process), or when the volume (Isochoric process) is held constant.
The derivative of the ideal gas law, $ PV = T $, is
 $ P dV + V dP = dT $
where $ P $ is the pressure, $ V $ is the specific volume, and $ T $ is temperature measured in energy units.
By definition of an isobaric thermal expansion, we have $ dP=0 $, so that $ P dV=dT $, and the isobaric thermal expansion coefficient is
 $ \alpha_{P = C^{te}} \equiv \frac{1}{V} \left(\frac{d V}{d T}\right) = \frac{1}{V} \left(\frac{1}{P}\right) = \frac{1}{PV} = \frac{1}{T} $.
Similarly, if the volume is held constant, that is if $ dV=0 $, we have $ V dP=dT $, so that the isovolumic thermal expansion is
 $ \alpha_{V=C^{te}} \equiv \frac{1}{P} \left(\frac{d P}{d T}\right) = \frac{1}{P} \left(\frac{1}{V}\right) = \frac{1}{P V} = \frac{1}{T} $.
For a solid, we can ignore the effects of pressure on the material, and the volumetric thermal expansion coefficient can be written:
 $ \alpha_V = \frac{1}{V}\,\frac{dV}{dT} $
where $ V $ is the volume of the material, and $ dV/dT $ is the rate of change of that volume with temperature.
This means that the volume of a material changes by some fixed fractional amount. For example, a steel block with a volume of 1 cubic meter might expand to 1.002 cubic meters when the temperature is raised by 50 K. This is an expansion of 0.2%. If we had a block of steel with a volume of 2 cubic meters, then under the same conditions, it would expand to 2.004 cubic meters, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004% K^{−1}.
If we already know the expansion coefficient, then we can calculate the change in volume
 $ \frac{\Delta V}{V} = \alpha_V\Delta T $
where $ \Delta V/V $ is the fractional change in volume (e.g., 0.002) and $ \Delta T $ is the change in temperature (50 °C).
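The steel example above can be sketched in Python:

```python
# Fractional volume change: dV/V = alpha_V * dT.
def volume_change(volume, alpha_v, delta_t):
    return volume * alpha_v * delta_t

alpha_steel = 0.00004          # 0.004% per kelvin, from the text above

# 1 m^3 of steel heated by 50 K grows by 0.002 m^3 (0.2%);
# 2 m^3 grows by twice as much, the same 0.2%.
assert abs(volume_change(1.0, alpha_steel, 50) - 0.002) < 1e-12
assert abs(volume_change(2.0, alpha_steel, 50) - 0.004) < 1e-12
```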
For common materials like many metals and compounds, the thermal expansion coefficient is inversely proportional to the melting point. In particular for metals the relation is:
 $ \alpha \approx \frac{0.020}{M_P} $
and for halides and oxides
 $ \alpha \approx \frac{0.038}{M_P} - 7.0 \cdot 10^{-6} \, \mathrm{K}^{-1} $
Quasiparticles
 See also: Wikipedia:List of quasiparticles and Wikipedia:Quasiparticle
A hole is a region with a net surplus of positive charges.
An antihole is a region with a net surplus of negative charges.
Electricity is the flow of holes and antiholes.
A p-type semiconductor only conducts holes.
An n-type semiconductor only conducts antiholes.
Holes and antiholes combine at the junction of a forward-biased diode.
Holes and antiholes form at, and move in opposite directions away from, the junction of a reverse-biased diode.
From Wikipedia:Exciton:
An exciton is a bound state of an electron and an electron hole which are attracted to each other by the electrostatic Coulomb force. It is an electrically neutral quasiparticle that exists in insulators, semiconductors and in some liquids. The wavefunction of the bound state is said to be hydrogenic, an exotic atom state akin to that of a hydrogen atom. However, the binding energy is much smaller and the particle's size much larger than a hydrogen atom. This is because of both the screening of the Coulomb force by other electrons in the semiconductor (i.e., its dielectric constant), and the small effective masses of the excited electron and hole. Provided the interaction is attractive, an exciton can bind with other excitons to form a biexciton, analogous to a dihydrogen molecule.
Tidal acceleration
 See also: Formation_of_the_Solar_System^{w}
Image shows an approximation of the shape (^{*}Equipotentials) of a rapidly spinning planet. North pole is at the top. South pole is at the bottom. The equator reaches orbital velocity.
Orbital velocity:
 $ v_o = \sqrt{\frac{GM}{r}} $
Orbital period:
 $ T = 2\pi\sqrt{\frac{r^3}{GM}} $
Orbital angular momentum:
 $ mvr \quad = \quad m \Bigg ( \sqrt{\frac{GM}{r}} \Bigg ) r \quad = \quad m \sqrt{GMr} $
Rotational angular momentum of solid sphere:
 $ L=I \omega = \frac{2}{5}mr^2 \frac{v}{r} = \frac{2}{5}mvr $
where:
 r is the orbit's semimajor axis^{w}
 G is the gravitational constant^{w},
 M is the mass of the more massive body.
 m is the mass of the less massive body.
Moon's orbital angular momentum is 28.73 * 10^33 Js
Earth's rotational angular momentum is 7.079 * 10^33 Js
The total amount of angular momentum for the Earth-Moon system is 28.73 + 4.6 = 33.33 * 10^33 Js
Moon's current orbit is 384,399 km. Its orbital period is 2.372 * 10^{6} seconds (27 days, 10 hours, 50 minutes). Its orbital velocity is 1.022 km/s.
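These figures can be checked against the orbital formulas above with a short Python sketch (GM for Earth, 3.986×10^14 m³/s², is a standard value I've supplied; the circular-orbit approximation gives ≈1.018 km/s versus the quoted mean of 1.022 km/s):

```python
import math

GM_EARTH = 3.986e14            # m^3 / s^2, standard value
r = 384_399e3                  # m, Moon's orbital radius

v = math.sqrt(GM_EARTH / r)                    # circular orbital velocity
T = 2 * math.pi * math.sqrt(r**3 / GM_EARTH)   # orbital period

assert abs(v - 1018) < 5          # about 1.02 km/s
assert abs(T - 2.372e6) < 5e3     # about 27.4 days
```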
^{*}Roche limit for the moon is
 Fluid: 18,381 km
 384,399 / 18,381 = 20.9
 Orbital momentum of moon at fluid Roche limit = 28.73 * 10^33 Js / sqrt(20.9) = 6.3 * 10^33
 Earth would spin (28.73 - 6.3 + 4.6)/4.6 = 5.876 times faster
 Rigid: 9,492 km
 384,399 / 9,492 = 40.5
 Orbital momentum of moon at rigid Roche limit = 28.73 * 10^33 Js / sqrt(40.5) = 4.5 * 10^33
 Earth would spin (28.73 - 4.5 + 4.6)/4.6 = 6.27 times faster
Orbital radius with period = 4 hours:
 $ \sqrt[3]{G \cdot 1 \text{ Earth mass} \cdot \Bigg ( \frac{4\text{ hours}}{2\pi} \Bigg )^2 } $ = 12,800 km
Alternately we can ask what the orbital period would be if Earth had a moon (not necessarily the moon) at 18,381 km.
 $ T = 2\pi\sqrt{\frac{(18,381 \text{ km})^3}{G*1 \text{ Earth mass} }} $ = 7.554 hours
 Earth would spin 24/7.554 = 3.177 times faster
 Earth's angular momentum would be 3.177 * 4.6 * 10^33 Js = 14.6142 * 10^33 Js
 Our current Moon's angular momentum would be 28.73 * 10^33 Js - (14.6142 - 4.6) * 10^33 Js = 18.7158 * 10^33 Js
 That's 18.7158 / 28.73 = 0.65
 So the current Moon's orbit would have been 0.65^2 * 384,399 km = 0.424 * 384,399 km = 162,985 km
Tidal rhythmites are alternating layers of sand and silt laid down offshore from estuaries having great tidal flows. Daily, monthly and seasonal cycles can be found in the deposits. This geological record indicates that 620 million years ago there were 400±7 solar days/year
The motion of the Moon can be followed with an accuracy of a few centimeters by lunar laser ranging. Laser pulses are bounced off mirrors on the surface of the moon. The results are:
 +38.08±0.04 mm/yr (384,399 km / 10.1 billion years)
 1.42*10^24 Js/yr (33.33 * 10^33 Js / 23 billion years)
 1.42*10^26 Js/century
The corresponding change in the length of the day can be computed:
 (1.42*10^26)/(4.6 * 10^33) * 24 hours = 3.087*10^{-8} * 24 hours = +2.667 ms/century
620 million yrs ago the Moon had 1.42*10^24 * 620*10^6 = 0.88*10^33 Js less angular momentum. The Moon's orbit was therefore 384,399 km * ((28.73 - 0.88)/28.73)^2 = 361,211 km. One month lasted 2.161 * 10^{6} seconds (25 days, 16 minutes, 40 seconds).
The Earth spun (4.6+0.88)/4.6 = 1.19 times faster so the day was 24 hours / 1.19 = 20.1680672 hours
The year was 400 "days" * 20.1680672 hours per "day" = 336.135 24hour periods
Earth's orbit was therefore
 $ \sqrt[3]{G \cdot 1 \text{ Solar mass} \Bigg ( \frac{336.134453 \text{ days}}{2\pi} \Bigg )^2} $ = 0.9461 au
Therefore Earth must be receding from the sun at 13 m/yr
Planets
| # | Planet | Density (g/cm^3) | Radius (km) | Surface gravity (g's) | Distance (au) |
|---|---|---|---|---|---|
| 1 | Mercury | 5.427 | 2,440 | 0.377 | 0.387 |
| 2 | Venus | 5.243 | 6,052 | 0.904 | 0.723 |
| 3 | Earth | 5.515 | 6,371 | 1 | 1.000 |
| 4 | Mars | 3.934 | 3,390 | 0.378 | 1.524 |
| 5 | Ceres | 2.093 | 476.2 | 0.028 | 2.766 |
| 6 | Jupiter | 1.326 | 69,911 | 2.528 | 5.203 |
| 7 | Saturn | 0.687 | 58,232 | 1.065 | 9.537 |
| 8 | Ouranos | 1.270 | 25,362 | 0.904 | 19.191 |
| 9 | Neptune | 1.638 | 24,622 | 1.137 | 30.069 |
From Wikipedia:16 Psyche:
16 Psyche is one of the ten most massive asteroids in the asteroid belt. It is over 200 km (120 mi) in diameter and contains a little less than 1% of the mass of the entire asteroid belt. It is thought to be the exposed iron core of a protoplanet.
Brown dwarfs
| Hydrogen | Atomic radius | Density (g/cm^{3}) | Mass per Jupiter volume | Surface gravity (g's) |
|---|---|---|---|---|
| Liquid | 1 | 0.07085 | 0.053 M_{Jup} | 0.14 |
| Metallic | 1/4 | 4.5344 | 3.400 M_{Jup} | 9.00 |
| Double | 1/5.657 | 12.8250 | 9.669 M_{Jup} | 25.56 |
| Triple | 1/8 | 36.2752 | 27.300 M_{Jup} | 72.16 |
| Quadruple | 1/11.31 | 102.6 | 77.350 M_{Jup} | 204.40 |
| Quintuple | 1/16 | 290.2016 | 219.000 M_{Jup} | 578.80 |
| Sextuple | 1/22.63 | 820.8140 | 618.800 M_{Jup} | 1636.00 |
As can be seen in the image to the right, all planets (Brown dwarfs) from 1 to 100 Jupiter masses are about 1 Jupiter radius which is 69,911 km. The largest "puffy" planets are 2 Jupiter radii. 1 Jupiter volume = 1.431×10^{15} km^{3}
This suggests that the pressure an electron shell (in degenerate matter) can withstand without again becoming degenerate (^{*}Electron degeneracy pressure) is inversely proportional to the sixth power of its radius:
 $ P \propto \frac{1}{r^6} $
(This formula only applies to degenerate matter like metallic hydrogen. Nondegenerate matter can withstand far more pressure).
If so then the maximum size (radius) that a planet composed entirely of one (degenerate) element could grow would depend only on, and be inversely proportional to, the atomic mass of its atoms. (Use 2 for the atomic mass of diatomic hydrogen).
Simplified calculation of radius of brown dwarf as core grows from zero to 1 Jupiter radius:
 r is radius of core with 2.83 (sqrt(2)^{3}) times the density of overlying material
Rock floats on top of the metallic hydrogen but iron sinks to the core. 0.1% of the mass of the brown dwarf is iron. Assuming an iron density of 231.85 g/cm3 (as in Earth's core), the gravity of the iron core will cause the brown dwarf to be about 3% smaller than it would be otherwise.
Dark matter
Dark matter is a type of unidentified matter that may constitute about 80% of the total matter in the universe. It has not been directly observed, but its gravitational effects are evident in a variety of astrophysical measurements. The primary evidence for dark matter is that calculations show that many galaxies would fly apart instead of rotating if they did not contain a large amount of matter beyond what can be observed.
From Wikipedia:Gravitational microlensing
Microlensing allows the study of objects that emit little or no light. With microlensing, the lens mass is too low for the displacement of light to be observed easily, but the apparent brightening of the source may still be detected. In such a situation, the lens will pass by the source in seconds to years instead of millions of years.
The Einstein radius, also called the Einstein angle, is the angular radius of the Einstein ring in the event of perfect alignment. It depends on the lens mass M, the distance of the lens d_{L}, and the distance of the source d_{S}:
 $ \theta_E = \sqrt{\frac{4GM}{c^2} \frac{d_S  d_L}{d_S d_L}} $ (in radians).
For M equal to 60 Jupiter masses, d_{L} = 4000 parsecs, and d_{S} = 8000 parsecs (typical for a Bulge microlensing event), the Einstein radius is 0.00024 arcseconds (angle subtended by 1 au at 4000 parsecs). By comparison, ideal Earthbased observations have angular resolution around 0.4 arcseconds, 1660 times greater. One parsec is equal to about 3.26 lightyears (30 trillion km).
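This worked example can be reproduced with a short Python sketch (the constants G, c, the Jupiter mass, and the parsec are standard SI values I've supplied):

```python
import math

G = 6.674e-11                 # m^3 kg^-1 s^-2
c = 2.998e8                   # m/s
M_JUP = 1.898e27              # kg
PARSEC = 3.086e16             # m

M = 60 * M_JUP
d_L, d_S = 4000 * PARSEC, 8000 * PARSEC

# Einstein radius formula from the text, in radians, then in arcseconds.
theta = math.sqrt(4 * G * M / c**2 * (d_S - d_L) / (d_S * d_L))
arcsec = math.degrees(theta) * 3600

assert abs(arcsec - 0.00024) < 0.00002   # matches the quoted 0.00024 arcsec
```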
Any brown dwarf surrounded by a circumstellar disk larger and thicker than 1 au would therefore be virtually completely undetectable.
Stars
 See also: ^{*}Stellar evolution, ^{*}Helium flash, ^{*}Schönberg–Chandrasekhar limit, Coronal_heating_problem^{w}
Fusion of diatomic hydrogen begins around 60 Jupiter masses. Fusion of monatomic helium requires significantly more pressure.
Fusion releases energy that heats the star causing it to expand. The expansion reduces the pressure in the core which reduces the rate of fusion. So the rate of fusion is self limiting. A low mass star has a lifetime of billions of years. A high mass star has a lifetime of only a few tens of millions of years despite starting with more hydrogen.
Low mass stars are far more common than high mass stars. The masses of the two component stars of NGC 3603-A1, A1a and A1b, determined from the orbital parameters are 116 ± 31 M☉ and 89 ± 16 M☉, respectively. This makes them the two most massive stars directly measured, i.e. not estimated from models.
The luminosity of a star is:
 $ L = 4 \pi R^2 \sigma T^4 $
 where σ is the ^{*}Stefan–Boltzmann constant:
 $ \sigma = \frac{2\pi^5k_{\rm B}^4}{15h^3c^2} = \frac{\pi^2k_{\rm B}^4}{60\hbar^3c^2} = 5.670373(21) \, \times 10^{-8}\ \textrm{J}\,\textrm{m}^{-2}\,\textrm{s}^{-1}\,\textrm{K}^{-4} $
The luminosity of the sun at 5772 K and 695,700 km is 3.828×10^26 Watts
 That's 62,970,000 watts/m^{2}
The brightness of sunlight at the top of Earth's atmosphere is about 1,400 watt/meter^{2}
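The solar figures can be checked against the luminosity formula with a short Python sketch (σ, T, and R are the values quoted above):

```python
import math

SIGMA = 5.670e-8             # W m^-2 K^-4, Stefan-Boltzmann constant
T = 5772.0                   # K, effective temperature of the Sun
R = 695_700e3                # m, solar radius

flux = SIGMA * T**4                  # surface flux, W/m^2
L = 4 * math.pi * R**2 * flux        # total luminosity, W

assert abs(flux - 6.29e7) / 6.29e7 < 0.01    # about 63 million W/m^2
assert abs(L - 3.828e26) / 3.828e26 < 0.01   # matches the quoted luminosity
```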
The plasma inside a star is nonrelativistic. A relativistic plasma with a thermal ^{*}distribution function has temperatures greater than around 260 keV, or ^{*}3.0 * 10^{9} K. Those sorts of temperatures are only created in a supernova. The core of the sun is about 15 * 10^{6} K.
Plasmas, which are normally opaque to light, are transparent to light with frequency higher than the ^{*}plasma frequency. The plasma literally can't vibrate fast enough to keep up with the light. Plasma frequency is proportional to the square root of the electron density.
 $ \omega = \sqrt{\frac{n_\mathrm{e} q_e^{2}}{m_e \varepsilon_0}} $
 where
 n_{e} = number of electrons / volume.
See also: ^{*}Bremsstrahlung#Thermal_bremsstrahlung
From 0.3 to 1.2 solar masses, the region around the stellar core is a radiative zone. (The light frequency is higher than the plasma frequency). The radius of the radiative zone increases monotonically with mass, with stars around 1.2 solar masses being almost entirely radiative.
From Wikipedia:Convective zone
In main sequence stars of less than about 1.3 solar masses, the outer envelope of the star contains a region of relatively low temperature, which causes the frequency of the light to be lower than the plasma frequency, which makes the opacity high enough to produce a steep temperature gradient. This produces an outer convection zone. The Sun's convection zone extends from 0.7 solar radii (500,000 km) to near the surface.
From Wikipedia:Cepheid variable
A Cepheid variable is a type of star that pulsates radially, varying in both diameter and temperature and producing changes in brightness with a welldefined stable period and amplitude.
A strong direct relationship between a Cepheid variable's luminosity and pulsation period allows one to know the true luminosity of a Cepheid by simply observing its pulsation period. This in turn allows one to determine the distance to the star, by comparing its known luminosity to its observed brightness.
The pulsation of cepheids is known to be driven by oscillations in the ionization of helium. From fully ionized (more opaque) He++ to partially ionized (more transparent) He+ and back to He++. See ^{*}Kappa mechanism.
In the swelling phase, its outer layers expand, causing them to cool. Because of the decreasing temperature the degree of ionization also decreases. This makes the gas more transparent, and thus makes it easier for the star to radiate its energy. This in turn will make the star start to contract. As the gas is thereby compressed, it is heated and the degree of ionization again increases. This makes the gas more opaque, and radiation temporarily becomes captured in the gas. This heats the gas further, leading it to expand once again. Thus a cycle of expansion and compression (swelling and shrinking) is maintained.
From Wikipedia:Instability strip
In normal A-F-G stars He is neutral in the stellar photosphere. Deeper below the photosphere, at about 25,000–30,000K, begins the He II layer (first He ionization). Second ionization (He III) starts at about 35,000–50,000K.
Recombination and Reionization 

The first phase change of hydrogen in the universe was recombination due to the cooling of the universe to the point where electrons and protons form neutral hydrogen. The universe was opaque before the recombination, due to the scattering of photons (of all wavelengths) off free electrons, but it became increasingly transparent as more electrons and protons combined to form neutral hydrogen atoms. The Dark Ages of the universe start at that point, because there were no light sources. The second phase change occurred once objects started to condense in the early universe that were energetic enough to reionize neutral hydrogen. As these objects formed and radiated energy, the universe reverted to once again being an ionized plasma. (See ^{*}Warm–hot intergalactic medium). At this time, however, matter had been diffused by the expansion of the universe, and the scattering interactions of photons and electrons were much less frequent than before electronproton recombination. Thus, a universe full of low density ionized hydrogen will remain transparent, as is the case today. 
The Sun's photosphere has a temperature between 4,500 and 6,000 K. Negative hydrogen ions (H^{-}) are the primary reason for the highly opaque nature of the photosphere.
As the star fuses hydrogen into heavier elements the heavier elements build up in the core. Eventually the outer layers of the star are blown away and all that's left is the core. We call what's left a white dwarf.
White dwarfs
| ^{*}Z | ^{*}A | Element | Abundance (ppm) | Density (g/cm^{3}) | Compressed density (g/cm^{3}) | Radius (km) |
|---|---|---|---|---|---|---|
| 1 | 1 | ^{*}Hydrogen | 739,000 | 0.07085 | 290.2 | 71,492 |
| 1 | 2 | ^{*}Deuterium | 100 | 0.1417 | 580.4 | 35,746 |
| 2 | 4 | ^{*}Helium | 240,000 | 0.125 | 512 | 35,746 |
| 4 | 8 | ^{*}Beryllium | 0 | 2 | 8,192 | 17,873 |
| 8 | 16 | ^{*}Oxygen | 10,400 | 32 | 131,072 | 8,936 |
| 6 | 12 | ^{*}Carbon | 4,600 | 10.125 | 41,472 | 11,915 |
| 10 | 20 | ^{*}Neon | 1,340 | 78.125 | 320,000 | 7,149 |
| 26 | 56 | ^{*}Iron-56 | 1,090 | 4358 | 15,748,096 | 2,553 |
| 7 | 14 | ^{*}Nitrogen | 960 | 18.76 | 76,841 | 10,213 |
| 14 | 28 | ^{*}Silicon | 650 | 300.125 | 1,229,312 | 5,107 |
| 12 | 24 | ^{*}Magnesium | 580 | 162 | 663,552 | 5,958 |
| 16 | 32 | ^{*}Sulfur | 440 | 512 | 2,097,152 | 4,468 |
^{1}H, ^{2}D burning:
 H + H → ^{2}D
 D + D → ^{4}He
^{4}He burning:
 He + He → unstable
 He × 3 → ^{12}C
 He + C → ^{16}O
^{12}C burning:
 C + C → ^{24}Mg
 C + O → ^{28}Si
^{16}O burning:
 O + O → ^{32}S
 O + Mg → ^{40}Ca
^{24}Mg burning:
 Mg + S → ^{56}Fe
^{28}Si, ^{32}S burning:
 Si + Si → ^{56}Fe
 Si + S → ^{60}Ni
 S + S → ^{64}Zn
End products: ^{56}Fe, ^{60}Ni
^{14}N and ^{20}Ne are produced when the outer layers become convective. ^{8}Be, ^{18}F, and ^{26}Al are unstable. 
A white dwarf is about the same size as the Earth but is far denser and far more massive. A typical temperature for a white dwarf is 25,000 K. That would make its surface brightness 350 times the surface brightness of the sun.
A simplified calculation of the radius of a white dwarf as its core grows from zero to half the original radius:
 Let r be the radius of the core. The core has 16 times the density (twice the atomic number) of the overlying material. The final state has half the radius and twice the mass of the original white dwarf.
A 0.6 solar mass white dwarf is 8,900 km in radius, which is 8.03 times smaller than Jupiter and suggests a composition of oxygen. It has a surface gravity of
 $ \frac{G \cdot 0.6 \text{ solar mass}}{(8900 \text{ km})^2} $ = 103,000 g's
Its density is 404,000 g/cm^{3}, which is 12,625 times (23.285^{3} times) denser than oxygen in its ground state. For comparison, sqrt(2)^{9} = 22.63.
A 1.13 solar mass white dwarf is 4,500 km in radius, which is 15.9 times smaller than Jupiter and suggests a composition of sulfur. It has a surface gravity of
 $ \frac{G \cdot 1.13 \text{ solar mass}}{(4500 \text{ km})^2} $ = 755,000 g's
Its density is 5.887 × 10^{6} g/cm^{3}, which is 11,498 times (22.57^{3} times) denser than sulfur in its ground state.
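The surface gravity and mean density figures above can be checked with a short sketch. The constants (G, the solar mass, standard gravity) are standard textbook values, not taken from this article:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
G_EARTH = 9.81       # standard gravity, m/s^2

def surface_gravity(mass_kg, radius_m):
    """Surface gravity in units of Earth g's."""
    return G * mass_kg / radius_m**2 / G_EARTH

def mean_density(mass_kg, radius_m):
    """Mean density in g/cm^3."""
    volume = (4 / 3) * math.pi * radius_m**3
    return mass_kg / volume / 1000.0   # kg/m^3 -> g/cm^3

# 0.6 solar mass, 8,900 km radius (oxygen composition)
print(surface_gravity(0.6 * M_SUN, 8.9e6))   # ~103,000 g's
print(mean_density(0.6 * M_SUN, 8.9e6))      # ~404,000 g/cm^3

# 1.13 solar mass, 4,500 km radius (sulfur composition)
print(surface_gravity(1.13 * M_SUN, 4.5e6))  # ~755,000 g's
print(mean_density(1.13 * M_SUN, 4.5e6))     # ~5.9e6 g/cm^3
```

Both the gravity and density figures quoted above come out of these two one-line formulas.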
For a white dwarf made of iron:
 Radius: 2,553 km
 Surface area: 8.2*10^{7} km^{2}
 Mass per surface area: 3.8 * 10^{13} g/mm^{2}
 Mass: 4.454 × 10^{7} g/cm^{3} × (4/3) × pi × (2553 km)^{3} = 1.56 solar masses.
 Surface gravity: 3.24 * 10^{6} g's
 Density: (sqrt(2)^{9})^{3} × 3844.75 g/cm^{3} = 4.454 × 10^{7} g/cm^{3}
 Core pressure: 1.8 * 10^{19} bars
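The mass and surface gravity of the iron white dwarf follow from the assumed density and radius listed above (a sketch; G and the solar mass are standard values):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

rho = 4.454e7 * 1000.0   # density from the list above, g/cm^3 -> kg/m^3
r = 2553e3               # radius from the list above, m

# Mass of a uniform sphere of that density and radius
mass = rho * (4 / 3) * math.pi * r**3
print(mass / M_SUN)      # ~1.56 solar masses

# Surface gravity in Earth g's
gravity = G * mass / r**2 / 9.81
print(gravity)           # ~3.24e6 g's
```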
The core of a white dwarf with a mass greater than the ^{*}Chandrasekhar limit (1.44 solar masses) will undergo gravitational collapse and become a neutron star.
Neutron stars
 See also: ^{*}Gravitoelectromagnetism
Assuming a solid honeycomb array of neutron pairs with radius 1 fm, a sheet of ^{*}neutronium (if such a thing existed) would have a density of 1.2893598 g/mm^{2}.
Density of a liquid neutron star made of neutron pairs with radius 1 fm would be 479.8×10^{12} g/cm^{3}
The maximum observed mass of neutron stars is about 2.01 M_{☉}.
At that density a 2 solar mass neutron star would have a radius of 12.5544 km
Its gravitational binding energy would be 0.282 solar masses of energy
The ^{*}Tolman–Oppenheimer–Volkoff limit (or TOV limit) is an upper bound to the mass of cold, nonrotating neutron stars, analogous to the Chandrasekhar limit for white dwarf stars. Observations of GW170817 suggest that the limit is close to 2.17 solar masses.
The equation of state^{w} for a neutron star is not yet known.
A 2 solar mass neutron star with radius of 12.5544 km would have a surface gravity of:
 $ \frac{G \cdot 2 \text{ solar mass}}{(12.5544 \text{ km})^2} $ = 1.717 × 10^{11} g's
The pressure in its core would be $ \frac{3}{8 \pi} \frac{G \cdot Mass_{active} \cdot Mass_{passive}}{r^4} = $ 5.07 × 10^{28} bar
Its moment of inertia is: 0.4 × 2 solar masses × (12.5544 km)^{2} = 2.507 × 10^{38} kg m^{2}
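The neutron star figures quoted above (surface gravity, binding energy, core pressure, moment of inertia) can be reproduced from the stated mass and radius. This is a sketch using Newtonian formulas, including the uniform-sphere binding energy (3/5)GM²/R, which is what the 0.282 solar mass figure corresponds to:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

m = 2 * M_SUN       # neutron star mass
r = 12.5544e3       # neutron star radius, m

gravity = G * m / r**2 / 9.81                     # surface gravity, Earth g's
binding = (3 / 5) * G * m**2 / r / C**2 / M_SUN   # uniform-sphere binding energy, solar masses
pressure = (3 / (8 * math.pi)) * G * m**2 / r**4  # central pressure formula from the text, Pa
inertia = 0.4 * m * r**2                          # uniform-sphere moment of inertia, kg m^2

print(gravity)         # ~1.72e11 g's
print(binding)         # ~0.282 solar masses
print(pressure / 1e5)  # ~5.07e28 bar
print(inertia)         # ~2.51e38 kg m^2
```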
From Wikipedia:Glitch (astronomy)
A glitch (See ^{*}Global_resurfacing_event) is a sudden increase of up to 1 part in 10^{6} in the rotational frequency of a rotation-powered pulsar. Following a glitch is a period of gradual recovery, lasting 10–100 days, where the observed periodicity slows to a period close to that observed before the glitch.
If mass is constant then
 $ radius \propto density^{-1/3} $
If angular momentum is constant then $ v \propto 1/r $, so
 $ frequency = v/r \propto 1/r^2 \propto density^{2/3} $
The moment of inertia of a solid crust 1 cm thick is: 0.666 × ((1.2 × 479.8×10^{12} g/cm^{3}) × 1 cm × 4 × pi × (12.5544 km)^{2}) × (12.5544 km)^{2} = 1.197×10^{33} kg m^{2}. That's 1/209,440 of the total moment of inertia. 1 cm doesn't seem like much, but if each neutron were the size of an atom then that one centimeter would be one or two km.
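The crust-to-star ratio of moments of inertia can be checked with the thin-shell formula (2/3)MR² for the crust and 0.4MR² for the whole star, using the numbers from the text:

```python
import math

M_SUN = 1.989e30                     # solar mass, kg
r = 12.5544e3                        # stellar radius, m
rho_crust = 1.2 * 479.8e12 * 1000.0  # solid crust density, g/cm^3 -> kg/m^3
thickness = 0.01                     # crust thickness: 1 cm, in m

# Mass of a 1 cm shell, then its thin-shell moment of inertia
m_crust = rho_crust * thickness * 4 * math.pi * r**2
i_crust = (2 / 3) * m_crust * r**2

# Uniform-sphere moment of inertia of the whole 2 solar mass star
i_star = 0.4 * (2 * M_SUN) * r**2

print(i_crust)           # ~1.20e33 kg m^2
print(i_star / i_crust)  # ~2.09e5, i.e. the crust is ~1/209,000 of the total
```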
External link: Pulsar glitches and their impact on neutronstar astrophysics
From Wikipedia:Supermassive black hole
A supermassive black hole (SMBH or SBH) is the largest type of ^{*}black hole, on the order of hundreds of thousands to billions of ^{*}solar masses (M_{☉}), and is found in the centre of almost all currently known massive galaxies.
The mass of the SMBH in a galaxy is often close to the combined mass of the galaxy's globular clusters.
The mean ratio of black hole mass to bulge mass is now believed to be approximately 1:1000.
 The most massive galaxy known is 30 trillion solar masses.
Some supermassive black holes appear to be over 10 billion solar masses.
From Wikipedia:Quasar:
A quasar is an active galactic nucleus of very high luminosity. A quasar consists of a supermassive black hole surrounded by an orbiting accretion disk of gas. The most powerful quasars have luminosities exceeding 2.6×10^{14} ℒ_{☉} (10^{41} W or 17.64631 M_{☉}/year), thousands of times greater than the luminosity of a large galaxy such as the Milky Way.
Growing at a rate of 17.6/1.4^{2} solar masses per year, a 60 billion solar mass black hole would take 6.66 billion years to reach full size. See ^{*}TON 618
Growing at a rate of 17.6/2.8^{2} solar masses per year, a 240 billion solar mass black hole would take 107 billion years to reach full size.
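The two growth times are just mass divided by the scaled accretion rate. The 1.4² and 2.8² divisors are the scaling used in the text above; the helper function below is only illustrative:

```python
def growth_time(mass_solar, rate_scale):
    """Years to assemble `mass_solar` at (17.6 / rate_scale^2) solar masses per year."""
    rate = 17.6 / rate_scale**2   # accretion rate, solar masses per year
    return mass_solar / rate

print(growth_time(60e9, 1.4))    # ~6.68e9 years
print(growth_time(240e9, 2.8))   # ~1.07e11 years
```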
Masses of supermassive black holes in billions of solar masses:
 240? (Hypothetical based on Zipf's law)
 120? (Hypothetical)
 80? (Hypothetical)
 66
 40
 33
 30
 23
 21
 20
 19.5
 18
 17
 15
 14
 14
 13.5
 13
 12.4
 12
 11
 11
 10
 10
 9.8
 9.7
 9.1
 7.8
 7.2
 7.2
 6.9
From Wikipedia:Eddington luminosity
The Eddington luminosity, also referred to as the Eddington limit, is the maximum luminosity a body (such as a star) can achieve when there is balance between the force of radiation acting outward and the gravitational force acting inward. The state of balance is called hydrostatic equilibrium. When a star exceeds the Eddington luminosity, it will initiate a very intense radiationdriven stellar wind from its outer layers.
For pure ionized hydrogen
 $ \begin{align}L_{\rm Edd}&=\frac{4\pi G M m_{\rm p} c} {\sigma_{\rm T}}\\ &\cong 1.26\times10^{31}\left(\frac{M}{M_\odot}\right){\rm W} = 1.26\times10^{38}\left(\frac{M}{M_\odot}\right){\rm erg/s} = 3.2\times10^4\left(\frac{M}{M_\odot}\right) L_\odot \end{align} $
where $ M_\odot $ is the mass of the Sun and $ L_\odot $ is the luminosity of the Sun.
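The Eddington luminosity formula above can be evaluated directly. The physical constants (proton mass, Thomson cross-section, solar luminosity) are standard values, not from this article:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
M_PROTON = 1.6726e-27  # proton mass, kg
SIGMA_T = 6.6524e-29   # Thomson scattering cross-section, m^2
L_SUN = 3.828e26       # solar luminosity, W

def eddington_luminosity(mass_kg):
    """Eddington luminosity for pure ionized hydrogen, in watts."""
    return 4 * math.pi * G * mass_kg * M_PROTON * C / SIGMA_T

print(eddington_luminosity(M_SUN))          # ~1.26e31 W
print(eddington_luminosity(M_SUN) / L_SUN)  # ~3.3e4 L_sun
```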
Gammaray bursts
From Wikipedia:Gammaray burst
Gammaray bursts (GRBs) are extremely energetic explosions that have been observed in distant galaxies. They are the brightest electromagnetic events known to occur in the universe. Bursts can last from ten milliseconds to several hours. After an initial flash of gamma rays, a longerlived "afterglow" is usually emitted at longer wavelengths (Xray, ultraviolet, optical, infrared, microwave and radio).
Assuming the gammaray explosion to be spherical, the energy output of ^{*}GRB 080319B would be within a factor of two of the restmass energy of the Sun (the energy which would be released were the Sun to be converted entirely into radiation).
No known process in the universe can produce this much energy in such a short time.
GRB 111209A is the longest lasting gammaray burst (GRB) detected by the Swift GammaRay Burst Mission on December 9, 2011. Its duration is longer than 7 hours.
On average, two long gamma-ray bursts occur every 3 days, with an average redshift of 2. Making the simplifying assumption that all long gamma-ray bursts occur at exactly redshift 2 (9.2 * 10^{9} light years) we get one gamma-ray burst per (1,635,000 light years)^{3}
There are 12 galaxies per cubic megaparsec. That's 1 galaxy per (1,425,000 light years)^{3}
One short GRB per 3 days at an average redshift of 0.5 (4.6 * 10^{9} light years) gives one GRB per (1,300,000 light years)^{3}
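The conversion from a number density of 12 galaxies per cubic megaparsec to a typical galaxy spacing is a cube root (a sketch; the light-years-per-megaparsec conversion is the standard value):

```python
LY_PER_MPC = 3.2616e6   # light years per megaparsec

density = 12                                # galaxies per cubic megaparsec
volume_per_galaxy = LY_PER_MPC**3 / density # ly^3 per galaxy
spacing = volume_per_galaxy ** (1 / 3)      # side of the cube each galaxy occupies

print(spacing)   # ~1.42e6 light years, matching "(1,425,000 light years)^3"
```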
 
Ultra-high-energy cosmic rays
External links: http://hires.physics.utah.edu/reading/uhecr.html, The cosmic ray energy spectrum as measured using the Pierre Auger Observatory
From Wikipedia:Cosmic ray
^{*}Cosmic rays are highenergy radiation, mainly originating outside the Solar System and even from distant galaxies. Upon impact with the Earth's atmosphere, cosmic rays can produce showers of secondary particles that sometimes reach the surface. Composed primarily of highenergy protons and atomic nuclei, they are of uncertain origin. Data from the Fermi Space Telescope (2013) have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernova explosions of stars. Active galactic nuclei are also theorized to produce cosmic rays.
From Wikipedia:Ultrahighenergy cosmic ray
In ^{*}astroparticle physics, an ultra-high-energy cosmic ray (UHECR) is a cosmic ray particle with a kinetic energy greater than 1×10^{18} ^{*}eV, far beyond both the ^{*}rest mass and energies typical of other cosmic ray particles.
An extremeenergy cosmic ray (EECR) is an UHECR with energy exceeding 5×10^{19} eV (about 8 joule), the socalled ^{*}Greisen–Zatsepin–Kuzmin limit (GZK limit). This limit should be the maximum energy of cosmic ray protons that have traveled long distances (about 160 million light years), since higherenergy protons would have lost energy over that distance due to scattering from photons in the ^{*}cosmic microwave background (CMB). However, if an EECR is not a proton, but a nucleus with $ A $ nucleons, then the GZK limit applies to its nucleons, each of which carry only a fraction $ 1/A $ of the total energy.
These particles are extremely rare; between 2004 and 2007, the initial runs of the ^{*}Pierre Auger Observatory (PAO) detected 27 events with estimated arrival energies above 5.7×10^{19} eV, i.e., about one such event every four weeks in the 3000 km^{2} area surveyed by the observatory.
At that rate 5.46 * 10^{18} particles will fall onto a star with radius 1 million kilometers every hundred million years.
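The 5.46 × 10^{18} figure follows from scaling the one-event-per-four-weeks-per-3000-km² rate up to the full surface area (4πr²) of a star with radius 1 million kilometers, over 100 million years:

```python
import math

area_star = 4 * math.pi * (1e6) ** 2   # star surface area, km^2 (radius 1e6 km)
area_survey = 3000.0                   # Pierre Auger Observatory survey area, km^2

# One event per four-week period per 3000 km^2, scaled to the star's area
events_per_period = area_star / area_survey
periods = 1e8 * 365.25 / 28            # four-week periods in 100 million years

print(events_per_period * periods)     # ~5.46e18 particles per 100 Myr
```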
From Wikipedia:Oh-My-God particle:
The Oh-My-God particle was an ultra-high-energy cosmic ray detected on the evening of 15 October 1991 by the Fly's Eye Cosmic Ray Detector. Its observation was a shock to astrophysicists, who estimated its energy to be approximately 3×10^{20} eV. It was probably a cluster of 6 ultra-high-energy cosmic ray particles.
mv^{2} = (205,887*128^2*2 neutron mass * (2.807*c)^2) = 5×10^{19} eV
Expansion of the universe
From Wikipedia:Expansion of the universe
The expansion of the universe is the increase of the distance between two distant parts of the universe with time.
The expansion of space is often illustrated with conceptual models. In the "balloon model" a spherical balloon is inflated from an initial size of zero (representing the big bang).
From Wikipedia:Scale factor (cosmology)
Some insight into the expansion can be obtained from a Newtonian expansion model which leads to a simplified version of the Friedmann equation. It relates the proper distance (which can change over time, unlike the comoving distance which is constant) between a pair of objects, e.g. two galaxy clusters, moving with the Hubble flow in an expanding or contracting FLRW universe at any arbitrary time $ t $ to their distance at some reference time $ t_0 $. The formula for this is:
 $ d(t) = a(t)d_0,\, $
where $ d(t) $ is the proper distance at epoch $ t $, $ d_0 $ is the distance at the reference time $ t_0 $ and $ a(t) $ is the scale factor. Thus, by definition, $ a(t_0) = 1 $.
The scale factor is dimensionless, with $ t $ counted from the birth of the universe and $ t_0 $ set to the present age of the universe: $ 13.799\pm0.021\,\mathrm{Gyr} $ giving the current value of $ a $ as $ a(t_0) $ or $ 1 $.
The evolution of the scale factor is a dynamical question, determined by the equations of general relativity, which are presented in the case of a locally isotropic, locally homogeneous universe by the ^{*}Friedmann equations.
The Hubble parameter is defined:
 $ H \equiv {\dot{a}(t) \over a(t)} $
where the dot represents a time derivative. From the previous equation $ d(t) = d_0 a(t) $ one can see that $ \dot{d}(t) = d_0 \dot{a}(t) $, and also that $ d_0 = \frac{d(t)}{a(t)} $, so combining these gives $ \dot{d}(t) = \frac{d(t) \dot{a}(t)}{a(t)} $, and substituting the above definition of the Hubble parameter gives $ \dot{d}(t) = H d(t) $ which is just Hubble's law.
The discovery of the linear relationship between redshift and distance, coupled with a supposed linear relation between recessional velocity and redshift, yields a straightforward mathematical expression for Hubble's Law as follows:
 $ v = H_0 \, D $
where
 $ v $ is the recessional velocity, typically expressed in km/s.
 H_{0} is Hubble's constant and corresponds to the value of $ H $ (often termed the Hubble parameter which is a value that is ^{*}time dependent and which can be expressed in terms of the ^{*}scale factor) in the Friedmann equations taken at the time of observation denoted by the subscript 0. This value is the same throughout the Universe for a given comoving time.
 $ D $ is the proper distance (which can change over time, unlike the comoving distance, which is constant) from the galaxy to the observer, measured in mega parsecs (Mpc), in the 3space defined by given cosmological time. (Recession velocity is just v = dD/dt).
Hubble's law is considered a fundamental relation between recessional velocity and distance. However, the relation between recessional velocity and redshift depends on the cosmological model adopted, and is not established except for small redshifts.
For distances D larger than the radius of the Hubble sphere r_{HS} , objects recede at a rate faster than the speed of light:
 $ r_{HS} = \frac{c}{H_0} \ . $
Its radius is the Hubble radius and its volume is the Hubble volume.
The Hubble constant $ H_0 $ has units of inverse time; the Hubble time t_{H} is simply defined as the inverse of the Hubble constant, i.e. $ t_H \equiv {1 \over H_0} = {1 \over 67.8\textrm{(km/s)/Mpc}} = 4.55\cdot 10^{17}\textrm{s} $ = 14.4 billion years. The Hubble time is the age the universe would have had if the expansion had been linear.
The value of the Hubble parameter changes over time, either increasing or decreasing depending on the value of the socalled deceleration parameter $ q $, which is defined by
 $ q = \left(1+\frac{\dot H}{H^2}\right). $
In a universe with a deceleration parameter equal to zero, it follows that H = 1/t, where t is the time since the Big Bang.
The age of the universe is thought to be 13.8 billion years.
1/13.8 billion years = 70.9 (km/s)/Mpc
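The unit conversion between an inverse age and a Hubble constant in (km/s)/Mpc, and the inverse conversion from H₀ to a Hubble time, can be checked directly (the seconds-per-year and km-per-Mpc factors are standard values):

```python
SECONDS_PER_YEAR = 3.156e7    # seconds in a year
KM_PER_MPC = 3.0857e19        # kilometers in a megaparsec

# 1 / (age of the universe), expressed in (km/s)/Mpc
age = 13.8e9 * SECONDS_PER_YEAR       # age of the universe, s
H0 = (1 / age) * KM_PER_MPC
print(H0)                             # ~70.9 (km/s)/Mpc

# Inverse: Hubble time for H0 = 67.8 (km/s)/Mpc
t_H = KM_PER_MPC / 67.8               # seconds
print(t_H / SECONDS_PER_YEAR / 1e9)   # ~14.4 billion years
```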
Weather
A cold front is the leading edge of a cold dense mass of air, replacing (at ground level) a warmer mass of air. Like a hot air balloon, the warm air rises above the cold air. The rising warm air expands and therefore cools. This causes the moisture within it to condense into droplets and releases the latent heat of condensation which causes the warm air to rise even further. If the warm air is moist enough, rain can occur along the boundary. A narrow line of thunderstorms often forms along the front. Temperature changes across the boundary can exceed 30 °C (54 °F).
The polar front is a cold front that arises as a result of cold polar air meeting warm subtropical air at the boundary between the polar cell and the Ferrel cell in each hemisphere.
Earth's weather is driven by 2 main areas.
 The polar front.
 Extratropical cyclones form here and usually move eastward.
 The Intertropical Convergence Zone.
 Tropical cyclones form here and usually move westward.
 (See also: SPCZ & SACZ)
From Wikipedia:Aleutian Low: Cyclones (Hurricanes/Typhoons) that form in the tropical and equatorial regions of the Pacific normally start off by moving toward the west but can veer northward and get caught in the Aleutian Low where they become Extratropical cyclones which move toward the east. This is usually seen in the later summer seasons. Both the November 2011 Bering Sea cyclone and the November 2014 Bering Sea cyclone were posttropical cyclones that had dissipated and restrengthened when the systems entered the Aleutian Low region. The storms are remembered and marked as two of the strongest storms to impact the Bering Sea and Aleutian Islands with pressure dropping below 950mb in each system. The magnitude of the low pressure creates an extreme atmospheric disturbance, which can cause other significant shifts in weather. Following the November 2014 Bering Sea cyclone, a huge cold wave, November 2014 North American cold wave, hit the US bringing record breaking low temperatures to many states.
Extratropical cyclones, which form along the polar front, can become so large that they draw moisture up directly from the tropics in what is called an atmospheric river. (See the image to the right.) Atmospheric rivers are typically several thousand kilometers long and only a few hundred kilometers wide, and a single one can carry a greater flux of water than the Earth's largest river, the Amazon.^{[52]} The Amazon discharges more water into the oceans than the next 7 largest rivers. See Zipf's law. (Like many other rivers the Amazon river valley is an Aulacogen.)
Air at the equator (Intertropical Convergence Zone) normally travels toward the west but the Indo-Australian monsoon causes so much air to rise over the Maritime Continent (See Tropical Warm Pool) that between Africa and the Maritime Continent the wind reverses direction and equatorial air travels eastward from Africa toward the Maritime Continent. The South American monsoon has a similar effect over the Pacific ocean west of Brazil.
The South Pacific convergence zone (SPCZ) & South Atlantic convergence zone (SACZ) are monsoon troughs that branch off the Intertropical Convergence Zone (ITCZ) at the points where the Indo-Australian monsoon and the South American monsoon occur.
The Inter-Ocean Convergence Zone has traditionally been called the Congo air boundary. Also called the South Indian Ocean Convergence Zone (SIOCZ) and Oceanic Tropical Convergence Zone (OTCZ). See also Asymmetry of the Intertropical Convergence Zone
During an El Niño the South American monsoon is unusually strong and the Indo-Australian monsoon is weak. During a La Niña the opposite occurs.
During a La Niña, a double ITCZ sometimes forms in the eastern Pacific, with one located north and another south of the Equator, one of which is usually stronger than the other. When this occurs, a narrow ridge of high pressure forms between the two convergence zones.
The Madden–Julian oscillation is a traveling pattern that propagates eastward at approximately 4 to 8 m/s (14 to 29 km/h, 9 to 18 mph), through the atmosphere above the warm parts of the Indian and Pacific oceans. This overall circulation pattern manifests itself most clearly as anomalous rainfall. In the Pacific, strong MJO activity is often observed 6 – 12 months prior to the onset of an El Niño episode, but is virtually absent during the maxima of some El Niño episodes, while MJO activity is typically greater during a La Niña episode.
Tropical air is far warmer than air outside the tropics and therefore holds far more moisture; as a result, thunderstorms in the tropics are much taller. Nevertheless, severe thunderstorms are not common in the tropics because the storm's own downdraft shuts off the inflow of warm moist air, killing the thunderstorm before it can become severe. Severe thunderstorms tend to occur further north because of the polar jet stream. The jet stream pushes against the top of the thunderstorm, displacing the downdraft so that it can no longer shut off the inflow of warm moist air. As a result, severe thunderstorms can continue to feed and grow for many hours whereas normal thunderstorms only last 30 minutes.
Over a 30 minute period a normal thunderstorm releases 10^{15} Joules of energy equivalent to 0.24 megatons of TNT. A storm that lasted 24 hours would release 48 times as much energy (48 x 10^{15} Joules). A hurricane (a tropical cyclone) releases 52 x 10^{18} Joules/day equivalent to 1000 continuous thunderstorms.
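The comparison of a hurricane's daily energy output to continuous thunderstorms is a single division; the quoted "1000" is this ratio rounded down:

```python
thunderstorm = 1e15          # J released by one 30-minute thunderstorm
per_day = thunderstorm * 48  # J/day for a thunderstorm sustained 24 hours
hurricane = 52e18            # J/day released by a hurricane

print(hurricane / per_day)   # ~1083 continuous thunderstorms
```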
The record lowest pressure established in the northern hemisphere is the extratropical cyclone of January 10, 1993 between Iceland and Scotland, which deepened to a central pressure of 912–915 mb (26.93″–27.02″). Most hurricanes have an eye below 990 millibars. In 2005, Hurricane Wilma reached the lowest barometric pressure ever recorded in an Atlantic Basin hurricane: 882 millibars. Hurricanes don't form in the South Atlantic.
If Earth's atmosphere were only slightly thicker then the air would be warmer and the amount of water vapor in the air would be much greater and lightning would therefore be much more common. The lightning would break apart the air molecules which would be washed down into the sea where they would end up in sediments which get subducted into the Earth. In this way the Earth's average air pressure is maintained at its current level.
During most of its history Earth only had one atmospheric cell that extended from the pole to the equator and as a result Earth was very much warmer.
 Hadley cell
During an ice age the Earth only has two cells. Ice ages are probably caused by deforestation caused by megafauna.
 Polar cell
 Ferrel cell
The Earth's atmosphere currently has 3 cells.
 Polar cell
 Ferrel cell
 Hadley cell
Scale height is the increase in altitude for which the atmospheric pressure decreases by a factor of e. The scale height remains constant for a particular temperature. It can be calculated by
 $ H = \frac{kT}{Mg} $
 where:
 k = ^{*}Boltzmann constant = 1.38 x 10^{−23} J·K^{−1}
 T = mean atmospheric temperature in kelvins = 250 K for Earth
 M = mean mass of a molecule (units kg)
 g = acceleration due to gravity on planetary surface (m/s²)
Approximate atmospheric scale heights for selected Solar System bodies follow.
 Venus: 15.9 km
 Earth: 8.5 km
 Mars: 11.1 km
 Jupiter: 27 km
 Saturn: 59.5 km
 Titan: 21 km
 Uranus: 27.7 km
 Neptune: 19.1–20.3 km
 Pluto: ~60 km
If all of Earth's atmosphere were at 1 bar then the atmosphere would be 8.5 km thick.
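The scale height formula H = kT/(Mg) can be evaluated for Earth. Note that with the listed T = 250 K it gives about 7.3 km; the tabulated 8.5 km corresponds to a warmer temperature of roughly 290 K (the mean molecular mass of air, 28.97 amu, is a standard value not given in the text):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.6605e-27     # atomic mass unit, kg

def scale_height(temp_k, mean_mass_amu, gravity):
    """Atmospheric scale height H = kT / (M g), returned in km."""
    return K_B * temp_k / (mean_mass_amu * AMU * gravity) / 1000.0

print(scale_height(250, 28.97, 9.81))  # ~7.3 km
print(scale_height(290, 28.97, 9.81))  # ~8.5 km (the tabulated Earth value)
```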
Life
 External link: Molecular biology of the cell
Did life begin with ^{*}nucleic acids or amino acids? Maybe it began with a molecule that was both a nucleic acid and an amino acid.
3-Aminobenzoic acid:
Creating the monomers in the ^{*}Primordial soup is easy, but getting the monomers to bond into a polymer is hard. So maybe it wasn't a polymer at all. Maybe it was a one-dimensional liquid crystal. See ^{*}Mesogen.
Unexplained phenomena
Books published by ^{*}William R. Corliss include:
 Mysteries of the Universe (1967)
 Mysteries Beneath the Sea (1970)
 Strange Phenomena: A Sourcebook of Unusual Natural Phenomena (1974)
 Strange Artifacts: A Sourcebook on Ancient Man (1974)
 The Unexplained (1976)
 Strange Life (1976)
 Strange Minds (1976)
 Strange Universe (1977)
 Handbook of Unusual Natural Phenomena (1977)
 Strange Planet (1978)
 Ancient Man: A Handbook of Puzzling Artifacts (1978)
 Mysterious Universe: A Handbook of Astronomical Anomalies (1979)
 Unknown Earth: A Handbook of Geological Enigmas (1980)
 Incredible Life: A Handbook of Biological Mysteries (1981)
 The Unfathomed Mind: A Handbook of Unusual Mental Phenomena (1982)
 Lightning, Auroras, Nocturnal Lights, and Related Luminous Phenomena (1982)
 Tornados, Dark Days, Anomalous Precipitation, and Related Weather Phenomena (1983)
 Earthquakes, Tides, Unidentified Sounds, and Related Phenomena (1983)
 Rare Halos, Mirages, Anomalous Rainbows, and Related Electromagnetic Phenomena (1984)
 The Moon and the Planets (1985)
 The Sun and Solar System Debris (1986)
 Stars, Galaxies, Cosmos (1987)
 Carolina Bays, Mima Mounds, Submarine Canyons (1988)
 Anomalies in Geology: Physical, Chemical, Biological (1989)
 Neglected Geological Anomalies (1990)
 Inner Earth: A Search for Anomalies (1991)
 Biological Anomalies: Humans I (1992)
 Biological Anomalies: Humans II (1993)
 Biological Anomalies: Humans III (1994)
 Science Frontiers: Some Anomalies and Curiosities of Nature (1994)
 Biological Anomalies: Mammals I (1995)
 Biological Anomalies: Mammals II (1996)
 Biological Anomalies: Birds (1998)
 Ancient Infrastructure: Remarkable Roads, Mines, Walls, Mounds, Stone Circles: A Catalog of Archeological Anomalies (1999)
 Ancient Structures: Remarkable Pyramids, Forts, Towers, Stone Chambers, Cities, Complexes: A Catalog of Archeological Anomalies (2001)
 Remarkable Luminous Phenomena in Nature: A Catalog of Geophysical Anomalies (2001)
 Scientific Anomalies and other Provocative Phenomena (2003)
 Archeological Anomalies: Small Artifacts (2003)
 Archeological Anomalies: Graphic Artifacts I (2005)
Psychology
 Fear is like dirt and it washes right off.
 You desire things because they are desirable. You crave things because you think they are (infinitely) forbidden.
 Humans reason. Animals project.
 Con men (confidence men) have the power to make themselves believe things that they know are not true.
From Wikipedia:Myers–Briggs Type Indicator
Jung's typological model regards psychological type as similar to left or right handedness: people are either born with, or develop, certain preferred ways of perceiving and deciding. The MBTI sorts some of these psychological differences into four opposite pairs, or "dichotomies", with each pair being associated with a basic psychological drive:
Curiosity (how):
 Sensing/Intuition
Time (when):
 Extraversion/Introversion
Empathy (what):
 Thinking/Feeling
Sympathy (why):
 Perception/Judging
Sensing types develop strong beliefs based on information that is in the present, tangible, and concrete: that is, empirical information that can be understood by the five senses. They tend to distrust hunches, which seem to come "out of nowhere".
Intuition types tend to be more interested in the underlying reality than in superficial appearance.
Extraverted types recharge and get their energy from spending time with people.
Introverted types recharge and get their energy from spending time alone
 An ambivert is both introverted and extraverted.
Thinking types tend to decide things from a more detached standpoint, measuring the decision by what seems reasonable, logical, causal, consistent, and matching a given set of rules.
Feeling types tend to come to decisions by associating or empathizing with the situation, looking at it 'from the inside' and weighing the situation to achieve, on balance, the greatest harmony, consensus and fit, considering the needs (and egos) of the people involved.
 A hermaphrodite is both Feeling and Thinking
Perception types like to "keep their options open". In other words they are willing to cheat whenever others aren't looking and are uncomfortable in an environment in which cheating is looked down on.
Judging types are more comfortable with a structured environment. One that is planned and organized, rational and reasonable. An environment in which everyone can get their fair share. An environment in which cheating is not permitted or is strongly discouraged.
 ^{*}Autistic  neither Intuition nor Sensing
 ^{*}Aspergers  neither Introverted nor Extraverted
 ^{*}Schizoid  neither Feeling nor Thinking
 ^{*}Schizophrenic  neither Perception nor Judging
Forward and backward thinking
When we use forwardthinking we start with a goal and ask what actions we need to perform to achieve that goal.
 For example: I need groceries therefore I need to drive to the Shopping Center.
When we use backwardthinking (usually called lateral thinking) we start with an action and ask what goal it could be part of.
 For example: Since I am already at the shopping center now would be a good time to get groceries.
Most people can easily do forward thinking but you almost have to be a Sherlock Holmes to do backward thinking painlessly.
REM Sleep
Animals that are allowed to get deep sleep but prevented from getting REM sleep die. Even schizoids require a little bit of REM sleep. ^{*}Death by sleep deprivation is a long, slow, and painful way to die.
It is thought that sleep allows the brain to get rid of waste products which then pass through the kidneys and are eliminated by urination. In effect, the brain is urinating while we dream.
Civilization and domestication
Big cats chase down and strangle their prey which die quickly. Nature's dirty little secret is that with other animals this is not always the case. Animals like wolves just don't have the tools necessary to kill large prey before they eat them. So they don't. They just start eating. This is called "kill by consumption" and the victim can take days to die.
Undomesticated animals cannot be tamed. Never turn your back on an undomesticated animal.
Animals, like birds and mammals, that bear young that are incapable of fending for themselves have evolved to feel empathy for their young. The young themselves have, in turn, evolved to become cute and harmless so that the mother will care even more for them. But they lose that cuteness and harmlessness when they reach puberty. Domesticating animals is a matter of breeding animals so that they retain that cuteness into adulthood. See ^{*}Neoteny.
From Wikipedia:Selfdomestication#Primates_(Humans)
Gregory Stock, director of the UCLA School of Medicine's Program of Medicine, Technology and Society, describes human selfdomestication as a process which "... mirrors our domestication [of animals] ... we have transformed ourselves through a similar process of selfselection."
A civilized society is a society whose laws don't favor any one person (like an all-powerful and all-seeing totalitarian leader) or any one group of people. The more a society treats everyone equally the more civilized it is. But treating everyone equally is not the same thing as treating everyone the same. Introverts, for example, don't want to be treated the same way that extroverts want to be treated.
United States Declaration of Independence:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
Camping
Camping advice 

 External link: https://wikihow.com/SurviveintheWoods, Primitive Technology
 Tell others where you are going and when to expect you back.
 You can survive three minutes without oxygen.
 The general rule of thumb is to carry no more than a third of your body weight. 30 pounds is a good target for most people. Soldiers can carry from 40–90+ pounds. A sleeping bag will weigh about 5 pounds.
 Stay warm and dry. Do not sleep in a position that is exposed to the cold night sky or you will wake up covered in dew.
 A compass can be useful if you know ahead of time what direction you need to go to find civilization (or a road or trail leading to civilization). Do your research before you leave.
 Three of anything in the wilderness is a standard distress signal.
 A crude shelter can be made by slinging a tarp over a rope stretched between two trees. Rain will tend to run down the rope and drip into your shelter; a simple knot in the rope can prevent this.
 Beware: a fire that appears to be out can smolder for a surprisingly long time. See How to put out a Campfire.
 You might want to consider using a campfire tripod to cook with. Some grains can be boiled in a bag.
 You need 2 liters (2 kg or 4.4 pounds) of water a day. You also need 50 grams of protein a day.
 Warm-blooded animals require 10 times as much food as cold-blooded animals; 90% of your calories go to keeping you warm.
An average man weighs 200 pounds. An average woman weighs 170 pounds.
If you are hiking in on foot then reducing weight is the main consideration. Dry food is much lighter and doesn't require refrigeration, but you must find a source of usable water at the campsite. Tallow (fat) doesn't require refrigeration either. Native Americans would add tallow to dried food to make ^{*}Pemmican. See comparingemergencyrationbars. Modern shortening (Crisco) has largely replaced tallow. Some high-calorie foods (calories per pound):
Lightweight but very useful:
A typical cellphone has enough power to reach a cell tower up to 45 miles away. Depending on the technology of the cellphone network, the maximum distance may be as low as 22 miles because the signal otherwise takes too long for the highly accurate timing of the cellphone protocol to work reliably. Radio transmission is line of sight, and trees do a very good job of absorbing radio waves.
Small minnows (under 2 inches or 5 cm) can be deep fried and eaten whole: coat with flour and fry 2 minutes. They can also be boiled. An 8 oz, 4-foot-by-4-foot minnow seine net can be had for only 5 dollars. Crayfish can be cooked in a rolling boil for 2 minutes then eaten.
One pound of rabbit meat has 517 calories and 100 grams of protein. Squirrel meat is about the same.
From Wikipedia:Urophagia
Survival guides such as the US Army Field Manual, The SAS Survival Handbook, and others generally advise against drinking urine for survival even when there is no other fluid available. These guides state that drinking urine tends to worsen, rather than relieve, dehydration due to the salts in it.
Plunged into freezing seas, around 20% of victims die within two minutes from cold shock (uncontrolled rapid breathing and gasping, causing water inhalation, massive increase in blood pressure and cardiac strain leading to cardiac arrest, and panic); another 50% die within 15–30 minutes from cold incapacitation (inability to use or control limbs and hands for swimming or gripping, as the body "protectively" shuts down the peripheral muscles of the limbs to protect its core). Exhaustion and unconsciousness cause drowning, claiming the rest within a similar time.
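The roughly 22-mile timing limit can be reproduced from the round-trip delay budget of a GSM-style network. The constants below (a 6-bit timing-advance field of 63 steps, one bit period of about 3.69 µs) are assumptions about classic GSM; other network technologies use different budgets.

```python
# Estimate the maximum cell radius imposed by a GSM-style timing-advance limit.
# The protocol constants here are assumptions about classic GSM networks.

C = 299_792_458          # speed of light in m/s
BIT_PERIOD_S = 3.69e-6   # duration of one GSM bit period
MAX_TA_STEPS = 63        # timing advance is a 6-bit field (0..63)

# The timing advance compensates for the round trip, so the one-way
# distance is half of what light covers in the budgeted time.
max_radius_m = C * MAX_TA_STEPS * BIT_PERIOD_S / 2
max_radius_miles = max_radius_m / 1609.34
print(round(max_radius_miles, 1))   # about 21.7 miles, i.e. roughly 22
```

Under these assumptions the radius comes out to about 35 km, which matches the "as low as 22 miles" figure in the text.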
Two kinds of qualia
Consciousness is being aware of being aware. Being aware means knowing what is happening. Computers know how to do things but don't yet know what they are doing.
Qualia are deeply mystifying. It is very hard to imagine how electrical signals passing through the microtubules of the brain could possibly produce something like the perception of colors.
But imagine a computer that knows what it is doing that is hooked up to a camera. Imagine that the computer is able to identify objects and intelligent enough to answer questions about what it is seeing. Obviously it must be perceiving some sort of sensation.
But that sensation would be like our perception of black and white. It would just be information. It would be devoid of beauty. It would not be like our perception of beautiful colors like yellow, red, or blue (which are pleasant, beautified versions of white, grey, and black).
The computer would live in a world without beauty or pleasure. But it would also live in a world without pain. It's hard to tell whether one should feel sorry for it or envy it, especially when one considers how much time and energy we spend doing stuff we hate in order to avoid something we hate even more. Without beauty or pleasure the computer wouldn't know why it was doing what it was doing.
The forebrain determines what to do. The cerebellum determines how (and when) to do it. But it is the midbrain that determines why things should be done in the first place. To teach a computer "why" it may be necessary to give the computer a midbrain.
If the cerebellum is our hands then the midbrain is our eyes. The midbrain is responsible for drawing our attention toward the things that most need our attention (in the same way that animals are drawn toward food). If our brains functioned the way they should we would see a black and white image of the world with only those parts that needed our attention colored yellow, red, or blue by the midbrain. But our brains don't function the way they should. Instead, we see a full-color image all the time and as a result we must live without the help and guidance of our midbrain. We must find our own way. And sometimes we can't do it. It's the ultimate gilded cage.
So the midbrain is responsible for feelings of pain and pleasure. Bear in mind that there are two kinds of pain: scary pain and non-scary pain. The difference is not intensity. They are two fundamentally different sensations. Some people do not experience scary pain. These people give the impression of not being afraid of anything. Fear of heights is actually fear of pain.
To help us avoid pain our cerebellum is constantly running simulations to see what is about to happen and to see what effect various actions we could take would have.
So we are attracted toward pleasant things and repelled from unpleasant things. If our brains functioned the way that they should then we would only experience pain or pleasure when the path forward was clear. But we experience pain even when there is nothing we can do about it.
Cargo cult science
From Wikipedia:Cargo cult science
Cargo cult science is a phrase describing practices that have the semblance of being scientific, but do not in fact follow the scientific method.
Cargo cults are religious practices that have appeared in many traditional tribal societies in the wake of interaction with technologically advanced cultures. They focus on obtaining the material wealth (the "cargo") of the advanced culture by imitating the actions they believe cause the appearance of cargo: by building landing strips, mock aircraft, mock radios, and the like. Similarly, cargo cult sciences employ the trappings of the scientific method, but like an airplane with no motor, these cargo cult sciences fail to deliver anything of value.
From the book Surely You're Joking, Mr. Feynman!.
In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he's the controller—and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land.
Feynman cautioned that to avoid becoming cargo cult scientists, researchers must avoid fooling themselves, be willing to question and doubt their own theories and their own results, and investigate possible flaws in a theory or an experiment. He recommended that researchers adopt an unusually high level of honesty which is rarely encountered in everyday life.
The history of published results for the Millikan Oil drop experiment is an example given in Surely You're Joking, Mr. Feynman!, in which each new publication slowly and quietly drifted more and more away from the initial (erroneous) values given by Robert Millikan toward the correct value, rather than all having a random distribution from the start around what is now believed to be the correct result. This slow drift in the chronological history of results is unnatural and suggests that nobody wanted to contradict the previous one, instead submitting only concordant results for publication.
Memory
 See also: ^{*}Method of loci
Memorizing a fact is easier if you can associate the fact with some abstract imagery. The more bizarre, outlandish, or even ridiculous the imagery the easier it is to remember the fact. This no doubt explains much of the imagery of mythology.
Those who can't remember mythology are doomed to repeat it.
Fascism
 The enemy of my enemy is my enemy
From Wikipedia:Fascism
Fascism is a radical, authoritarian or totalitarian nationalist or ultranationalist political ideology. Fascists paradoxically promote violence and war as actions that create positive transformation in society. Fascists exalt militarism as providing national regeneration, spiritual renovation, vitality, education, instilling of a will to dominate in people's character, and creating national comradeship through military service. Fascists view conflict as an inevitable fact of life that is responsible for all human progress.
Ultimately, it is easier to define fascism by what it is against than by what it is for. Fascism is anti-anarchist, anti-communist, anti-conservative, anti-democratic, anti-individualist, anti-liberal, anti-parliamentary, anti-bourgeois, and anti-proletarian. It entails a distinctive type of anti-capitalism and is typically, with few exceptions, anti-clerical. Fascism rejects the concepts of egalitarianism, materialism, and rationalism in favour of action, discipline, hierarchy, spirit, and will. In economics, fascists oppose liberalism (as a bourgeois movement) and Marxism (as a proletarian movement) for being exclusive economic class-based movements.
Indeed, fascism is perhaps best described as "anti-ism"; that is, the philosophy of being against everyone and everything all of the time. The only place where fascism makes any sense is boot camp. But if fascists had their way they would turn the entire world into one big never-ending boot camp.
Hitler
See also: ^{*}Cult of personality
A report^{[53]}^{[54]} prepared during the war by the United States Office of Strategic Services describing Hitler's psychological profile states:
He has been able, in some manner or other, to unearth and apply successfully many factors pertaining to group psychology... Capacity to appeal to the most primitive, as well as the most ideal, inclinations in man, to arouse the basest instincts and yet cloak them with nobility, justifying all actions as means to the attainment of an ideal goal.
Appreciation of winning confidence from the people by a show of efficiency within the organization and government. It is said that foods and supplies are already in the local warehouses when the announcement concerning the date of distribution is made. Although they could be distributed immediately, the date is set for several weeks ahead in order to create an impression of super-efficiency and win the confidence of the people. Every effort is made to avoid making a promise which cannot be fulfilled at precisely the appointed time.
Hitler's ability to repudiate his own conscience in arriving at political decisions has eliminated the force which usually checks and complicates the forwardgoing thoughts and resolutions of most socially responsible statesmen. He has, therefore, been able to take that course of action which appeals to him as most effective without pulling his punches. The result has been that he has frequently outwitted his adversaries and attained ends which would not have been as easily attained by a normal course. Nevertheless, it has helped to build up the myth of his infallibility and invincibility.
Equally important has been his ability to persuade others to repudiate their individual consciences and assume that role himself. He can then decree for the individual what is right and wrong, permissible or impermissible and can use them freely in the attainment of his own ends. As Goering has said: "I have no conscience. My conscience is Adolph Hitler."
This has enabled Hitler to make full use of terror and mobilize the fears of the people which he evaluated with an almost uncanny precision.
His primary rules were: never allow the public to cool off; never admit a fault or wrong; never concede that there may be some good in your enemy; never leave room for alternatives; never accept blame; concentrate on one enemy at a time and blame him for everything that goes wrong; people will believe a big lie sooner than a little one; and if you repeat it frequently enough people will sooner or later believe it.
World Empires  

From Wikipedia:Ox
Oxen are thought to have first been harnessed and put to work around 4000 BC.
From Wikipedia:Domestication of the horse
How and when horses became domesticated is disputed. The clearest evidence of early use of the horse as a means of transport is from chariot burials dated c. 2000 BCE.
The New Kingdom pharaohs from 1549 to 1069 BC established a period of unprecedented prosperity by securing their borders and strengthening diplomatic ties with their neighbours, including the Mitanni Empire, Assyria, and Canaan. Military campaigns waged under Tuthmosis I and his grandson Tuthmosis III extended the influence of the pharaohs to the largest empire Egypt had ever seen. Egypt's wealth, however, made it a tempting target for invasion. The effects of external threats were exacerbated by internal problems such as corruption, tomb robbery, and civil unrest.
From Wikipedia:Carthaginian Iberia
The Phoenicians founded the city of Carthage in 814 BC. Carthage annexed territory in Sicily, Africa, and Sardinia, and in 575 BC they created colonies on the Iberian peninsula.
The first dynasty of the Persian Empire was created by the Achaemenids, established by Cyrus the Great in 550 BC.
From Wikipedia:Alexander the Great
Alexander the Great was a king of the ancient Greek kingdom of Macedon and a member of the Argead dynasty. In 334 BC, he invaded the Achaemenid Empire (Persian Empire), overthrew Persian King Darius III, and conquered the Achaemenid Empire in its entirety. He invaded India in 326 BC.
Chandragupta Maurya raised an army, with the assistance of Chanakya, and overthrew the Nanda Empire in c. 322 BC. Chandragupta rapidly expanded his power westwards across central and western India by conquering the satraps left by Alexander the Great, and by 317 BC the empire had fully occupied northwestern India. The Mauryan Empire then defeated Seleucus I, founder of the Seleucid Empire, during the Seleucid–Mauryan war, thus gaining additional territory west of the Indus River.
Taking indirect evidence into account from the work of the Greek technician Apollonius of Perge, the British historian of technology M.J.T. Lewis dates the appearance of the vertical-axle watermill to the early 3rd century BC, and the horizontal-axle watermill to around 240 BC, with Byzantium and Alexandria as the assigned places of invention.
From Wikipedia:Punic Wars
The Punic (Phoenician) Wars were a series of three wars fought between Rome and Carthage from 264 to 146 BC. Rome conquered Carthage's empire, completely destroyed the city, and became the most powerful state of the Western Mediterranean.
From Wikipedia:Byzantine Empire
Theodosius I (379–395) was the last Emperor to rule both the Eastern and Western halves of the Empire. He issued a series of edicts essentially banning pagan religion. To fend off the Huns, Theodosius had to pay an enormous annual tribute to Attila.
From Wikipedia:Huns
By 430 the Huns had established a vast, if short-lived, dominion in Europe. After Attila's death in 453, the Hunnic Empire collapsed, and many of the remaining Huns were often hired as mercenaries by Constantinople. The Roman historian Procopius of Caesarea related the Huns of Europe to the Hephthalites or "White Huns" who subjugated the Sassanids and invaded northwestern India. He contrasted the Huns with the Hephthalites, in that the Hephthalites were sedentary, white-skinned, and possessed "not ugly" features.
In 476 Romulus, the last of the Roman emperors in the west, was overthrown by the Germanic leader Odoacer, who became the first Barbarian to rule in Rome. The Slavs, under the names of the Antes and the Sclaveni, make their first appearance in the early 500s, emerging from the area of the Carpathian Mountains, the lower Danube and the Black Sea.^{[55]}
From Wikipedia:Early Muslim conquests
In the late 620s Muhammad had already managed to conquer and unify much of Arabia under Muslim rule. Muhammad died in 632.
From Wikipedia:Rashidun Caliphate
The Rashidun Caliphate (632–661) was the first of the four major caliphates. The Rashidun Caliphate is characterized by a twenty-five-year period of rapid military expansion, followed by a five-year period of internal strife. It was ruled by the first four successive caliphs. Shia Muslims do not consider the rule of the first three caliphs legitimate. The Muslim conquest of Persia, also known as the Arab conquest of Iran, led to the end of the Sasanian Empire in 651 and the eventual decline of the Zoroastrian religion in Iran (Persia).^{[56]}
From Wikipedia:Umayyad Caliphate
The Umayyad Caliphate (661–750) was the second of the four major caliphates. The Umayyads continued the Muslim conquests, incorporating Transoxiana, Sindh, the Maghreb and the Iberian Peninsula (Al-Andalus) into the Muslim world. The Christian and Jewish populations still had autonomy.
From Wikipedia:Khazars
The Khazars were a semi-nomadic Turkic people. Khazaria long served as a buffer state between the Byzantine Empire and both the nomads of the northern steppes and the Umayyad Caliphate.
The ruling elite of the Khazars were said by Judah Halevi and Abraham ibn Daud to have converted to Rabbinic Judaism in the 700s.
Leo the Deacon, a Byzantine historian and chronicler, refers to the Rus' as "Scythians" and notes that they tended to adopt Greek rituals and customs.
From Wikipedia:Battle of Tours
In October 732, the army of the Umayyad Caliphate led by Al Ghafiqi met Frankish and Burgundian forces under Charles Martel in an area between the cities of Tours and Poitiers (modern north-central France), leading to a decisive, historically important Frankish victory known as the Battle of Tours.
From Wikipedia:Abbasid Caliphate
The Abbasid Caliphate (750) was the third of the Islamic caliphates. Abu al-'Abbas as-Saffah defeated the Umayyads in 750. Immediately after their victory, As-Saffah sent his forces to Central Asia, where they fought against Tang dynasty expansion during the Battle of Talas. Under Al-Mansur the empire's capital was moved from Damascus, in Syria, to Baghdad. Eventually the Abbasids were forced to cede authority over Al-Andalus (Spain) and the Maghreb (Northwest Africa) to the Umayyads. Abbasid leadership over the vast Islamic empire was gradually reduced to a ceremonial religious function.
Emperor Charlemagne (800–814) united much of western and central Europe during the early Middle Ages. He was the first recognized emperor to rule from western Europe since the fall of Rome. The Holy Roman Empire of the German Nation began with Otto I in 962.
Imperialism 

The Mongol Empire (1206–1368) emerged from the unification of several nomadic tribes in the Mongol homeland under the leadership of Genghis Khan. Gunpowder spread rapidly throughout the Old World as a result of the Mongol conquests during the 1200s, with a written formula for it appearing in the 1267 Opus Majus treatise by Roger Bacon.
Dum Diversas (English: Until different) is a papal bull issued on 18 June 1452 by Pope Nicholas V. It authorized Afonso V of Portugal to conquer Saracens and pagans and consign them to "perpetual servitude":
We grant you [Kings of Spain and Portugal] by these present documents, with our Apostolic Authority, full and free permission to invade, search out, capture, and subjugate the Saracens and pagans and any other unbelievers and enemies of Christ wherever they may be, as well as their kingdoms, duchies, counties, principalities, and other property [...] and to reduce their persons into perpetual servitude.
From Wikipedia:First wave of European colonization
The first European colonization wave took place from the early 1400s until the early 1800s, and primarily involved the European colonization of the Americas, though it also included the establishment of European colonies in India and in Maritime Southeast Asia. During this period, European interests in Africa primarily focused on the establishment of trading posts there, particularly for the African slave trade.
Turkic peoples migrated west from Turkestan and Mongolia towards Eastern Europe, the Iranian plateau and Anatolia (modern Turkey) in many waves. The date of the initial expansion remains unknown. After many battles, they established their own state and later constructed the Ottoman Empire. The Ottoman Empire controlled much of southeastern Europe, western Asia and northern Africa between the 1300s and early 1900s. The Ottomans ended the Byzantine Empire with the 1453 conquest of Constantinople.
The Protestant Reformation is usually considered to have started with the publication of the Ninety-five Theses by Martin Luther in 1517.
From Wikipedia:Ivan the Terrible
On 16 January 1547, at age sixteen, Ivan the Terrible was crowned "Tsar of All the Russias".
The British East India Company received a Royal Charter from Queen Elizabeth I on 31 December 1600. The Dutch East India Company was a publicly tradable corporation that was founded in 1602. The French East India Company was founded in 1664.
From Wikipedia:Glorious Revolution
The Glorious Revolution, also called the Revolution of 1688, was the overthrow of King James II of England. James's overthrow began modern English parliamentary democracy: the Bill of Rights 1689 has become one of the most important documents in the political history of Britain, and never since has the monarch held absolute power.
The Russian Empire was an empire that existed across Eurasia and North America following the end of the Great Northern War (1700–1721) against the Swedish Empire.
Industrial revolution 

From Wikipedia:Industrial Revolution and Wikipedia:Watt steam engine
The Industrial Revolution began in Great Britain, and many of the technological innovations were of British origin. The Watt steam engine, developed sporadically from 1763 to 1775, was a key point in the Industrial Revolution. Watt's two most important improvements were the separate condenser and rotary motion. Wealth was no longer a matter of owning land or subjugating vassals; now, the best way to create wealth was to create a thriving economy. This, I believe, is the real reason monarchies were replaced with democracies.
From Wikipedia:American Revolutionary War
The American Revolutionary War (1775–1783), also known as the American War of Independence, was a global war that began as a conflict between Great Britain and its Thirteen Colonies, which declared independence as the United States of America. France formally allied with the Americans and entered the war in 1778, and Spain joined the war the following year as an ally of France.
From Wikipedia:French Revolution
The French Revolution (1789–1799) was a period of far-reaching social and political upheaval in France and its colonies. The Revolution overthrew the monarchy, established a republic, catalyzed violent periods of political turmoil, and finally culminated in a dictatorship under Napoleon, who brought many of its principles to areas he conquered in Western Europe and beyond. Inspired by liberal and radical ideas, the Revolution profoundly altered the course of modern history, triggering the global decline of absolute monarchies while replacing them with republics and liberal democracies. Through the Revolutionary Wars, it unleashed a wave of global conflicts that extended from the Caribbean to the Middle East. Historians widely regard the Revolution as one of the most important events in human history.
The term Bohemianism emerged in France in the early 1800s when artists and creators began to concentrate in the lower-rent, lower-class Romani neighborhoods. Bohémien was a common term for the Romani people of France (widely known by the exonym Gypsies because they were originally thought to have come from Egypt), who were mistakenly thought to have reached France in the 1400s via Bohemia (the western part of the modern Czech Republic). Literary "Bohemians" were associated in the French imagination with roving Romani people, outsiders apart from conventional society and untroubled by its disapproval. The term carries a connotation of arcane enlightenment, and a less frequently intended, pejorative connotation of carelessness about personal hygiene and marital fidelity.
Emperor Francis II dissolved the Holy Roman Empire of the German Nation on 6 August 1806, after the creation of the Confederation of the Rhine by Napoleon.
From Wikipedia:Confederation of the Rhine
The Confederation of the Rhine was a confederation of client states of the First French Empire. It was formed initially from 16 German states by Napoleon after he defeated Austria and Russia at the Battle of Austerlitz. The allies opposing Napoleon dissolved the Confederation of the Rhine on 4 November 1813. Most members of the Confederation of the Rhine, along with Prussia and Austria, formed the German Confederation.
From Wikipedia:German Confederation
The German Confederation was an association of 39 German states in Central Europe, created by the Congress of Vienna in 1815.
1830s–1860s: Enormous railway building booms in the United States. The Communist Manifesto by German philosophers Karl Marx and Friedrich Engels was published in London just as the revolutions of 1848 began to erupt. On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life by Charles Darwin was published on 24 November 1859.
1859: The first proper traction engine, in the form recognisable today, was developed. The first half of the 1860s was a period of great experimentation, but by the end of the decade the standard form of the traction engine had evolved and would change little over the next sixty years. It was widely adopted for agricultural use.
1861: The southern states secede. The next year the Pacific Railroad Act is passed "to aid in the construction of a [transcontinental] railroad and telegraph line from the Missouri river to the Pacific ocean, and to secure to the government the use of the same for postal, military, and other purposes".
Lebensreform ("life reform") was a social movement in late-1800s and early-1900s Germany and Switzerland that propagated a back-to-nature lifestyle, emphasizing among other things health food/raw food/organic food, nudism, sexual liberation, alternative medicine, and religious reform, and at the same time abstention from alcohol, tobacco, drugs, and vaccines. Some practitioners emigrated to California and directly influenced the hippie movement.
The German Confederation ended as a result of the Austro-Prussian War of 1866 between the Austrian Empire and its allies on one side and the Kingdom of Prussia and its allies on the other. Prussia and its allies created the North German Confederation in 1867. The North German Confederation Reichstag and Bundesrat accepted renaming the North German Confederation the German Empire and giving the title of German Emperor (Kaiser) to the King of Prussia on 1 January 1871.
In 1881 Nietzsche began publishing his most well-known books.
Sigmund Freud qualified as a doctor of medicine at the University of Vienna in 1881.
From Wikipedia:Unification of Germany
The unification of Germany into a politically and administratively integrated nation state officially occurred on 18 January 1871.
From Wikipedia:German colonial empire
German colonial efforts only began in 1884 with the Scramble for Africa. Claiming much of the leftover territory that was yet unclaimed in the Scramble for Africa, Germany managed to build the third-largest colonial empire of the time, after the British and the French.
From Wikipedia:New Imperialism
The period featured an unprecedented pursuit of overseas territorial acquisitions. At the time, states focused on building their empires with new technological advances and developments, making their territory bigger through conquest and exploiting their resources. During the era of New Imperialism, the Western powers (and Japan) individually conquered almost all of Africa and parts of Asia. By 1914, 90 percent of Africa was under European control, with only Ethiopia (Abyssinia), the Dervish state (a portion of present-day Somalia) and Liberia still independent.
World War I 

World War I began 28 July 1914.
From Wikipedia:Russian Revolution In February 1917 (March in the Gregorian calendar) the Russian Empire collapsed with the abdication of Emperor Nicholas II and the old regime was replaced by the Russian Provisional Government which was heavily dominated by the interests of large capitalists and the noble aristocracy. The February Revolution took place in the context of heavy military setbacks which left much of the Russian Army in a state of mutiny. When the Provisional Government chose to continue fighting the war with Germany, the Bolsheviks and other socialist factions were able to exploit virtually universal disdain towards the war effort as justification to advance the revolution further.
In the October Revolution (November in the Gregorian calendar), the Bolsheviks led an armed insurrection by workers and soldiers in Petrograd that successfully overthrew the Provisional Government, transferring all its authority to the soviets, with the capital being relocated to Moscow shortly thereafter. The promise to end Russia's participation in the First World War was honored promptly.
From Wikipedia:German Revolution of 1918–19
In the face of defeat, the German Naval Command insisted on trying to precipitate a climactic battle with the British Royal Navy by means of its naval order of 24 October 1918. The battle never took place. Instead of obeying their orders to begin preparations to fight the British, German sailors led a revolt in the naval ports of Wilhelmshaven on 29 October 1918, followed by the Kiel mutiny in the first days of November. These disturbances spread the spirit of civil unrest across Germany and ultimately led to the proclamation of a republic on 9 November 1918. Shortly thereafter, Emperor Wilhelm II abdicated his throne and fled the country. The function of head of state was succeeded by the President of the Reich.
An armistice with Germany was signed in a railroad carriage at Compiègne. At 11 am on 11 November 1918—"the eleventh hour of the eleventh day of the eleventh month"—a ceasefire came into effect. The Treaty of Versailles ended the state of war between Germany and the Allied Powers. It was signed on 28 June 1919 in Versailles. Poland regained its independence with the Treaty of Versailles.
From Mein Kampf (fair use): There has always been one possible way, and one only, of making weak or wavering men, or even downright poltroons, face their duty steadfastly. This means that the deserter must be given to understand that his desertion will bring upon him just the very thing he is flying from. At the Front a man may die, but the deserter MUST die.
Only this draconian threat against every attempt to desert the flag can have a terrifying effect, not merely on the individual but also on the mass. 
From Wikipedia:Political views of Adolf Hitler
After World War I, Hitler stayed in the army, which was mainly engaged in suppressing socialist uprisings across Germany, including in Munich, where Hitler returned in 1919. In July 1919 Hitler was appointed Verbindungsmann (intelligence agent) of an Aufklärungskommando (reconnaissance commando) of the Reichswehr, both to influence other soldiers and to infiltrate the German Workers' Party (DAP). Much like the political activists in the DAP, Hitler blamed the loss of the First World War on Jewish (i.e. Bolshevik) intrigue at home and abroad. Hitler became impressed with founder Anton Drexler's antisemitic, nationalist, anti-capitalist and anti-Marxist ideas.
On the orders of his army superiors, Hitler applied to join the party, and within a week was accepted. Hitler was discharged from the army on 31 March 1920 and began working full-time for the party. Displaying his talent for oratory and propaganda skills, with the support of Drexler, Hitler became chief of propaganda for the party in early 1920. Party members promulgated their 25-point manifesto on 24 February 1920 (co-authored by Hitler, Anton Drexler, Gottfried Feder, and Dietrich Eckart). At the same time the party changed its name to the National Socialist German Workers' Party (NSDAP), commonly known as the Nazi Party.
25-point manifesto 

From Wikipedia:National Socialist Program

Brownshirts
From Wikipedia:Sturmabteilung, Wikipedia:Stennes Revolt, and Wikipedia:Beefsteak Nazi
The Sturmabteilung (SA), literally Storm Battalion (i.e. stormtroopers), functioned as the original paramilitary wing of the Nazi Party. The SA developed by organizing and formalizing the groups of ex-soldiers and beer hall brawlers. It played a significant role in ^{*}Adolf Hitler's rise to power in the 1920s and 1930s. Its primary purposes were providing protection for Nazi rallies and assemblies, disrupting the meetings of opposing parties, fighting against the paramilitary units of the opposing parties, especially the Red Front Fighters League of the Communist Party of Germany, and intimidating Slavs, Romanis, trade unionists, and, especially, Jews – for instance, during the Nazi boycott of Jewish businesses. The SA were also called the "Brownshirts" (Braunhemden) from the color of their uniform shirts.
In 1922, the Nazi Party created a youth section, the Jugendbund (youth band), for young men between the ages of 14 and 18 years. Its successor, the Hitler Youth (Hitlerjugend or HJ), remained under SA command until May 1932.
In the second half of 1922, hyperinflation caused many personal fortunes to be rendered worthless. When the German government failed to meet its reparations payments and French troops marched in to occupy the industrial areas along the Ruhr in January 1923, widespread civil unrest was the result. By November 1923, the US dollar was worth 4,210,500,000,000 German marks. French and British economic experts began to claim that Germany deliberately destroyed its economy to avoid war reparations.
The book, “Adolf Hitler: His Life and His Speeches,” by Baron Adolf Victor von Koerber was published in early fall of 1923. The book compares Hitler to Jesus, likening his moment of politicization to Jesus’ resurrection and using terms such as ‘holy’ and ‘deliverance’. It also argues that it should become ‘the new bible of today’. It is now suspected that Hitler himself wrote the book.
The Beer Hall Putsch was a failed coup attempt on 8–9 November 1923 by the Nazi Party leader Adolf Hitler and other ^{*}Kampfbund leaders, including Gregor Strasser, a regional head of the SA in Lower Bavaria, to seize power in Munich, Bavaria. Röhm, Hitler, General Erich Ludendorff, Lieutenant Colonel Hermann Kriebel and six others were tried in February 1924 for high treason. Röhm was found guilty and sentenced to a year and three months in prison, but the sentence was suspended and he was granted a conditional discharge. After a few weeks in prison Strasser was released because he had been elected a member of the Bavarian Landtag.
While Hitler was in prison (and writing ^{*}Mein Kampf), Ernst Röhm helped to create the Frontbann as a legal alternative to the then-outlawed SA. At Landsberg prison in April 1924, Röhm had also been given authority by Hitler to rebuild the SA in any way he saw fit. Hitler was released on parole on 20 December 1924 and Hess ten days later. The ban on the NSDAP and SA was lifted in February 1925. When in April 1925 Hitler and Ludendorff disapproved of the proposals under which Röhm was prepared to integrate the 30,000-strong Frontbann into the SA, Röhm resigned.
Beefsteak Nazi was a term used in Nazi Germany to describe Communists and Socialists who joined the Nazi Party. These individuals were like a 'beefsteak' – brown on the outside and red on the inside. The term was particularly used for working-class members of the Sturmabteilung (SA) who were aligned with Strasserism.
As a former Marxist in his early years, Goebbels once stated "how thin the dividing line" was between communism and National Socialism, which had caused many Red Front Fighters to "switch to the SA". Goebbels expressed that sentiment in a 1925 public speech, declaring that "the difference between Communism and the Hitler faith is very slight".
The Mueller government imploded in late March 1930. Its successor, the Bruening government, was unable to obtain a parliamentary majority.

Members of the SA in Berlin, led by Stennes, had for some time been voicing objections to the policies and purposes of the SA, as defined by Hitler. These SA members saw their organization as a revolutionary group, the vanguard of a national-socialist order that would overthrow the hated Republic by force. Stennes complained that advancement within the SA was improperly based upon cronyism and favoritism rather than upon merit. He objected to the general law-abiding approach that Adolf Hitler had adopted after the Beer Hall Putsch, and he and his men chafed under the Hitlerian order to terminate street attacks upon Communists and Jews. Stennes decided that action was needed to make a statement. The SA then stormed the Gau office on the Hedemannstrasse, injuring the SS men and wrecking the premises.

In September 1930, as a consequence of the Stennes Revolt in Berlin, Hitler assumed supreme command of the SA as its new Oberster SA-Führer. The SA cheered and were delighted that their leader was finally giving them the recognition they felt they deserved. He sent a personal request to Röhm, asking him to return to serve as the SA's chief of staff. Röhm accepted this offer and began his new assignment on 5 January 1931. Röhm established new Gruppen which had no regional Nazi Party oversight. Each Gruppe extended over several regions and was commanded by an SA Gruppenführer who answered only to Röhm or Hitler. Many of these stormtroopers believed in the socialist promise of National Socialism and expected the Nazi regime to take more radical economic action, such as breaking up the vast landed estates of the aristocracy once they obtained national power.
Adolf Hitler was appointed Chancellor of Germany on 30 January 1933 by Paul von Hindenburg. Göring (the number two man in the Nazi Party) was named as Minister Without Portfolio, Minister of the Interior for Prussia, and Reich Commissioner of Aviation.

The Reichstag fire occurred on the night of 27 February 1933. Göring was one of the first to arrive on the scene. (At the Nuremberg trials, General Franz Halder testified that Göring admitted responsibility for starting the fire.) The Nazis took advantage of the fire to advance their own political aims. The Reichstag Fire Decree, passed the next day on Hitler's urging, suspended basic rights and allowed detention without trial. Göring demanded that the detainees should be shot, but Rudolf Diels, head of the Prussian political police, ignored the order.

After only two months in office, the Reichstag passed the Enabling Act on 24 March 1933, giving the Reich Chancellor full legislative powers for a period of four years – the Chancellor could introduce any law without consulting Parliament. Rudolf Hess was named Deputy Führer of the NSDAP on 21 April 1933. Hitler's leadership style involved giving contradictory orders to his subordinates, while placing them into positions where their duties and responsibilities overlapped. In this way, Hitler fostered distrust, competition, and infighting among his subordinates to consolidate and maximise his own power.
After Hitler and the Nazis obtained national power, the SA became increasingly eager for power itself. The SA leaders argued that the Nazi revolution had not ended when Hitler achieved power, but rather needed to implement socialism in Germany (see ^{*}Strasserism). The SA numbered over three million men and many saw themselves as a replacement for the "antiquated" Reichswehr. Röhm's ideal was to absorb the army (then limited by law to no more than 100,000 men) into the SA, which would be a new "people's army". This deeply offended and alarmed the army, and threatened Hitler's goal of co-opting the Reichswehr. The SA's increasing power and ambitions also posed a threat to the other Nazi leaders.
SS and Gestapo
Originally an adjunct to the SA, the Schutzstaffel (SS), or protection squad, was placed under the control of Heinrich Himmler in part to restrict the power of the SA and their leaders. The younger SS had evolved to be more than a bodyguard unit for Hitler and showed itself better suited to carry out Hitler's policies, including those of a criminal nature. Over time the SS became answerable only to Hitler, a development typical of the organizational structure of the entire Nazi regime, where legal norms were replaced by actions undertaken under the ^{*}Führerprinzip (leader principle), where Hitler's will was considered to be above the law.^{[58]}
As Interior Minister of Prussia Göring had command of the largest police force in Germany. Göring detached the political and intelligence sections from the police and filled their ranks with Nazis. On 26 April 1933, Göring merged the two units as the Geheime Staatspolizei (Secret State Police), which was abbreviated by a post office clerk and became known as the "Gestapo". The first commander of the Gestapo was Rudolf Diels.
On 5 March 1933, yet another Reichstag election took place, the last multi-party election to be held before the defeat of the Nazis. It was not the landslide expected by the party leadership. Goebbels finally received Hitler's appointment to the cabinet, officially becoming head of the newly created Reich Ministry of Public Enlightenment and Propaganda on 14 March 1933.
Concerned that Diels was not ruthless enough to effectively counteract the power of the Sturmabteilung (SA), Göring handed over control of the Gestapo to Himmler on 20 April 1934. Himmler named Reinhard Heydrich (whom Hitler called "the man with the iron heart") to head the Gestapo on 22 April 1934. Himmler asked Heydrich to assemble a dossier on Röhm. Heydrich manufactured evidence that suggested that Röhm had been paid 12 million marks by French agents to overthrow Hitler. Hitler was also concerned that Röhm and the SA had the power to remove him as leader. Göring and Himmler played on this fear by constantly feeding him with new information on Röhm's proposed coup. A masterstroke was to claim that Gregor Strasser, whom Hitler hated, was part of the planned conspiracy against him. With this news Hitler ordered all the SA leaders to attend a meeting in the Hanselbauer Hotel in Bad Wiessee.
On 30 June 1934, Hitler, accompanied by SS units, arrived at Bad Wiessee, where he personally placed Röhm and other highranking SA leaders under arrest. (See ^{*}Night of the Long Knives). The homosexuality of Röhm and other SA leaders was made public to add "shock value", even though the sexuality of Röhm and other named SA leaders had been known by Hitler and other Nazi leaders for years. Arriving back at party headquarters in Munich, Hitler addressed the assembled crowd. Consumed with rage, Hitler denounced "the worst treachery in world history."
War Is a Racket

From Wikipedia:War Is a Racket

War Is a Racket is a speech and a 1935 short book by Smedley D. Butler, a retired United States Marine Corps Major General and two-time Medal of Honor recipient.

WAR is a racket. It always has been. 
Einsatzgruppen

From Wikipedia:Reinhard Heydrich, Wikipedia:Einsatzgruppen, Wikipedia:Final Solution

With the SA out of the way, Heydrich began building the Gestapo into an instrument of fear. The Gestapo had the authority to arrest citizens on the suspicion that they might commit a crime, and even the definition of a crime was at their discretion. The Gestapo Law, passed on 10 Feb 1936, gave police the right to act extra-legally. (In other words, it was legal for the Gestapo to break the law.) This led to the sweeping use of Schutzhaft—"protective custody", a euphemism for the power to imprison people without judicial proceedings. The courts were not allowed to investigate or interfere.

On 7 March 1936, Adolf Hitler took a massive gamble by sending 30,000 troops into the Rhineland. This was significant because the remilitarization of the Rhineland violated the terms of the Treaty of Versailles. The British decided that the Germans had the right to "enter their own backyard", and no action was taken.

On the morning of 12 March 1938, the 8th Army of the German Wehrmacht crossed the border into Austria in what is known as the Anschluss ('joining'). The Einsatzgruppen (special-ops units) had its origins in the ad hoc Einsatzkommando formed by Heydrich to secure government buildings and documents following the Anschluss.
In response to Adolf Hitler's plan to invade Poland on 1 September 1939, Heydrich reformed the Einsatzgruppen to travel in the wake of the German armies. Membership at this point was drawn from the SS, the SD, the police, and the Gestapo. From September to December 1939 the Einsatzgruppen and others took part in Action T4, a programme of systematic murder undertaken by the Nazi regime of persons with physical and mental disabilities and patients of psychiatric hospitals. Then, following a Hitler-Himmler directive, the Einsatzgruppen were reformed in anticipation of the invasion of the Soviet Union (Operation Barbarossa). The invasion was set for 15 May 1941, though it was delayed for over a month.

The Commissar Order was an order issued by the German High Command (OKW) on 6 June 1941. It instructed the Wehrmacht that any Soviet political commissar identified among captured troops be summarily executed. The Axis invasion of the Soviet Union started at 03:15 on Sunday, 22 June 1941, during a waning crescent moon. On 2 July 1941 Heydrich issued an order to his Einsatzkommandos for the on-the-spot execution of all Bolsheviks, interpreted by the SS to mean all Jews. One of the first indiscriminate massacres of men, women, and children in Reichskommissariat Ukraine took the lives of over 4,000 Polish Jews in occupied Łuck on 2–4 July 1941, murdered by Einsatzkommando 4a assisted by the Ukrainian People's Militia.

On the orders of Himmler, forwarded to Odilo Globocnik soon after his visit to Lublin on 17–20 July 1941, concentration camp Lublin (Majdanek) was established in October 1941. On 13 October 1941, SS Leader Odilo Globocnik received an oral order from Himmler – anticipating the fall of Moscow – to start immediate construction work on the first killing centre at Bełżec. In October 1941, Herbert Lange chose Chełmno on the Ner for an extermination centre, because of the estate, with a large manor house similar to Sonnenstein. 
The Generalplan Ost (General Plan for the East) called for deporting the population of occupied Eastern Europe and the Soviet Union to Siberia, for use as slave labour. The initial plan was to implement Generalplan Ost after the conquest of the Soviet Union. However, with the entry of the United States into the war in December 1941 and the German failure in the Battle of Moscow, Hitler decided that the Jews of Europe were to be exterminated immediately rather than after the war, which now had no end in sight.

Chełmno extermination camp was the first of the Nazi German extermination camps. It operated from December 8, 1941. In January 1942, during a secret meeting of German leaders chaired by Reinhard Heydrich, Operation Reinhard was drafted. Within months, three top-secret camps (at Bełżec, Sobibór, and Treblinka) were built. Under Eichmann's supervision, large-scale deportations began almost immediately. The extermination camps of Operation Reinhard kept no prisoners. To hide the evidence of this war crime, all bodies were burned in open air pits. In the second phase of annihilation, the Jewish inhabitants of central, western, and southeastern Europe were transported by Holocaust trains to camps with newly built gassing facilities.

Germany invaded Hungary on 19 March 1944. Berlin fell in May 1945. 
Highly recommend: War and Peace by Tolstoy
Military-industrial complex
From Wikipedia:Chance for Peace speech:
The Chance for Peace speech, also known as the Cross of Iron speech, was an address given by U.S. President Dwight D. Eisenhower on April 16, 1953, shortly after the death of Soviet dictator Joseph Stalin.
Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children.

The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities. It is two electric power plants, each serving a town of 60,000 population. It is two fine, fully equipped hospitals. It is some fifty miles of concrete pavement. We pay for a single fighter with a half-million bushels of wheat. We pay for a single destroyer with new homes that could have housed more than 8,000 people. . . .

This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is humanity hanging from a cross of iron.
Eisenhower expressed regret for having set the precedent of large peacetime military expenditures.
From: _The White House Years 1956–61_ by Dwight D. Eisenhower:
During the years of my Presidency, and especially the latter years, I began to feel more and more uneasiness about the effect on the nation of tremendous peacetime military expenditures....
But in mid-1953, after the end of the Korean War, I determined that we would not again become so weak militarily as to encourage aggression. This decision demanded a military budget that would establish, by its very size, a peacetime precedent.
...
The makers of the expensive munitions of war, to be sure, like the profits they receive... Each community in which a manufacturing plant or a military installation is located profits from the money spent and the jobs created in the area... All of these forces, and more, tend, therefore, to override the convictions of responsible officials who are determined to have a defense structure of adequate size but are equally determined that it shall not grow beyond that level. In the long run, the combinations of pressures for growth can create an almost overpowering influence. Unjustified military spending is nothing more than a distorted use of the nation's resources.
From Wikipedia:Eisenhower's farewell address
Despite being a politician with a military background and the only general to be elected president in the 20th century, Eisenhower famously warned the nation about the corrupting influence of what he described as the "military-industrial complex".
We face a hostile ideology global in scope, atheistic in character, ruthless in purpose, and insidious in method.

Crises there will continue to be. In meeting them, whether foreign or domestic, great or small, there is a recurring temptation to feel that some spectacular and costly action could become the miraculous solution.
But each proposal must be weighed in light of a broader consideration: the need to maintain balance in and among national programs.
Until the latest of our world conflicts, the United States had no armaments industry. American makers of plowshares could, with time and as required, make swords as well. But we can no longer risk emergency improvisation of national defense. We have been compelled to create a permanent armaments industry of vast proportions. Added to this, three and a half million men and women are directly engaged in the defense establishment. We annually spend on military security alone more than the net income of all United States corporations.
Now this conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence—economic, political, even spiritual—is felt in every city, every Statehouse, every office of the Federal government. We recognize the imperative need for this development. Yet, we must not fail to comprehend its grave implications. Our toil, resources, and livelihood are all involved. So is the very structure of our society.
In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the militaryindustrial complex. The potential for the disastrous rise of misplaced power exists and will persist. We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together.
He also expressed his concomitant concern for corruption of the scientific process as part of this centralization of funding in the Federal government:
Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades.

In this revolution, research has become central; it also becomes more formalized, complex, and costly. A steadily increasing share is conducted for, by, or at the direction of, the Federal government.
...
The prospect of domination of the nation's scholars by Federal employment, project allocation, and the power of money is ever present and is gravely to be regarded.
Yet in holding scientific discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientifictechnological elite.
In 2011, the United States spent more (in absolute numbers) on its military than the next 13 nations combined.
Real wages
Real wages in the United States rose roughly in step with productivity until around 1980, when the Reagan administration introduced supply-side economics.
The President and the Press
From Wikisource:The President and the Press
The President and the Press: Address before the American Newspaper Publishers Association (1961) by John F. Kennedy.
Delivered in the Waldorf-Astoria Hotel, New York City, April 27, 1961.
I want to talk about our common responsibilities in the face of a common danger. I refer, first, to the need for a far greater public information; and, second, to the need for far greater official secrecy.
In time of "clear and present danger", the courts have held that even the privileged rights of the First Amendment must yield to the public's need for national security.
If you are awaiting a finding of "clear and present danger", then I can only say that the danger has never been more clear and its presence has never been more imminent.
For we are opposed around the world by a monolithic and ruthless conspiracy that relies primarily on covert means for expanding its sphere of influence – on infiltration instead of invasion, on subversion instead of elections, on intimidation instead of free choice, on guerrillas by night instead of armies by day. It is a system which has conscripted vast human and material resources into the building of a tightly knit, highly efficient machine that combines military, diplomatic, intelligence, economic, scientific and political operations.
Its preparations are concealed, not published. Its mistakes are buried, not headlined. Its dissenters are silenced, not praised. No expenditure is questioned, no rumor is printed, no secret is revealed. It conducts the Cold War, in short, with a wartime discipline no democracy would ever hope or wish to match.
I have no easy answer to the dilemma that I have posed, and would not seek to impose it if I had one.
...our press was protected by the First Amendment...not primarily to amuse and entertain, not to emphasize the trivial and the sentimental, not to simply "give the public what it wants" – but to inform, to arouse, to reflect, to state our dangers and our opportunities, to indicate our crises and our choices, to lead, mold, educate and sometimes even anger public opinion.
Counterculture

From Wikipedia:Beat Generation

The Beat Generation was a literary movement started by a group of authors whose work explored and influenced American culture and politics in the post-World War II era. The bulk of their work was published and popularized throughout the 1950s. Central elements of Beat culture are rejection of standard narrative values, spiritual quest, exploration of American and Eastern religions, rejection of materialism, explicit portrayals of the human condition, experimentation with psychedelic drugs, and sexual liberation and exploration. Allen Ginsberg's Howl (1956), William S. Burroughs's Naked Lunch (1959) and Jack Kerouac's On the Road (1957) are among the best known examples of Beat literature.

From Wikipedia:History of the hippie movement

The Beat Generation, especially those associated with the San Francisco Renaissance, gradually gave way to the 1960s era counterculture, accompanied by a shift in terminology from "beatnik" to "freak" and "hippie." The hippie subculture began its development as a youth movement in the United States during the early 1960s and then developed around the world. Many of the original Beats remained active participants, notably Allen Ginsberg, who became a fixture of the anti-war movement. On the other hand, Jack Kerouac broke with Ginsberg and criticized the 1960s protest movements as an "excuse for spitefulness."

From Wikipedia:Counterculture of the 1960s

The era essentially commenced in earnest with the assassination of John F. Kennedy in November 1963. After the January 14, 1967 Human Be-In in San Francisco organized by artist Michael Bowen, the media's attention on culture was fully activated. In 1967 Scott McKenzie's rendition of the song "San Francisco (Be Sure to Wear Flowers in Your Hair)" brought as many as 100,000 young people from all over the world to celebrate San Francisco's "Summer of Love." 
Upon his release from prison on March 21, 1967, Manson established himself as a guru in San Francisco's Haight-Ashbury district, which during 1967's "Summer of Love" was emerging as the signature hippie locale. He strongly implied that he was Christ; he often told a story envisioning himself on the cross with the nails in his feet and hands. He began using the alias "Charles Willis Manson." He often said it very slowly ("Charles' Will Is Man's Son") — implying that his will was the same as that of the Son of Man.

Martin Luther King Jr.'s assassination took place on April 4, 1968. For some time, Manson had been saying that blacks would soon rise up in rebellion in America's cities. Manson explained that this had also been predicted by the Beatles. The White Album songs (released on 22 November 1968), he declared, told it all in code. In fact, he maintained (or would soon maintain), the album was directed at the Family, an elect group that was being instructed to preserve the worthy from the impending disaster.

In early January 1969, the Family moved to a canary-yellow home in Canoga Park, not far from the Spahn Ranch. Because this locale would allow the group to remain "submerged beneath the awareness of the outside world", Manson called it the Yellow Submarine, another Beatles reference. There, Family members prepared for the impending apocalypse, which Manson had termed "Helter Skelter", after the song of that name, and which he had preached to his Family would happen in the summer of 1969.
Gary Hinman supplied Beausoleil with a batch of bad mescaline that Beausoleil in turn sold to the Straight Satans motorcycle gang. When the bikers demanded their money back, Manson ordered Beausoleil to Hinman's residence to get the money. Hinman refused to pay. Manson arrived and proceeded to slice off a part of Hinman's ear with a sword. Atkins and Brunner stitched it up with dental floss afterwards. Manson then ordered Beausoleil to kill Hinman and told him to make it look as if the crime had been committed by black revolutionaries. Beausoleil stabbed Hinman to death on July 27, 1969.

The Family gained national notoriety after the murder of actress Sharon Tate and four others on August 9, 1969 by Tex Watson and three other members of the Family, acting under the instructions of Charles Manson. According to Bobby Beausoleil, it was actually enacted to convince police that the killers of Gary Hinman were in fact still at large.

Woodstock was a music festival on a dairy farm in Bethel, New York from August 15–18, 1969 which attracted an audience of more than 400,000. The Altamont Speedway Free Festival was a counterculture-era rock concert December 6, 1969 in northern California. The event is best known for considerable violence, including the stabbing death of Meredith Hunter, two accidental deaths caused by a hit-and-run car accident, and one accidental death by LSD-induced drowning in an irrigation canal.

From Wikipedia:Counterculture of the 1960s

The counterculture became absorbed into the popular culture with the termination of US combat military involvement in Southeast Asia and the end of the draft in 1973, and ultimately with the resignation of President Richard Nixon in August 1974. 
Internet trolls
In Internet terminology, a troll is someone who comes into an established community such as an online discussion forum and posts inflammatory, rude, repetitive or offensive messages, as well as top-post flooding and impersonating others, intentionally designed to annoy or antagonize the existing members or disrupt the flow of discussion. A troll's main goal is to arouse anger and frustration or otherwise shock and offend the message board's other participants, and a troll will write whatever it takes to achieve this end.
One popular trolling strategy is the practice of Winning by Losing. While the victim is trying to put forward solid and convincing facts to prove his position, the troll's only goal is to infuriate its prey. The troll takes (what it knows to be) a badly flawed, wholly illogical argument, and then vigorously defends it while mocking and insulting its prey. The troll looks like a complete fool, but this is all part of the plan. The victim becomes noticeably angry while repeatedly trying to explain the flaws of the troll's argument. Provoking this anger was the troll's one and only goal from the very beginning.
One particularly effective way of provoking anger, and an often used trolling strategy, is the strategy of turning the tables. The troll tries to guess what the other participants are thinking about the troll (for example, "you are a hypocrite") then the troll will accuse the other participants of that.
Trolls want to turn every conversation into a trial with themselves as self-appointed judge, jury, and executioner.
Experienced participants in online forums know that the most effective way to discourage a troll is usually to ignore him or her, because responding encourages a true troll to continue disruptive posts — hence the oftenseen warning "Please do not feed the troll".
Advanced mathematics
See also
External links
 MIT open courseware
 Cheat sheets
 http://mathinsight.org
 https://math.stackexchange.com
 https://www.eng.famu.fsu.edu/~dommelen/quantum/style_a/IV._Supplementary_Informati.html
 http://www.sosmath.com
 https://webhome.phy.duke.edu/~rgb/Class/intro_math_review/intro_math_review/node1.html
 Wikiversity:Mathematics
 w:c:4chanscience:Mathematics
References
 ↑ Wikipedia:Generalization
 ↑ Wikipedia:Division algebra
 ↑ Wikipedia:Cartesian product
 ↑ Wikipedia:Tangent bundle
 ↑ Wikipedia:Lie group
 ↑ Wikipedia:Sesquilinear form
 ↑ Wikipedia:Outer product
 ↑ Wikipedia:Tensor (intrinsic definition)
 ↑ Wikipedia:Tensor
 ↑ Wikipedia:Special unitary group
 ↑ Lawson, H. Blaine; Michelsohn, Marie-Louise (1989). Spin Geometry. Princeton University Press. ISBN 9780691085425, page 14
 ↑ Friedrich, Thomas (2000), Dirac Operators in Riemannian Geometry, American Mathematical Society^{w}, ISBN 9780821820551, page 15
 ↑ "Pauli matrices". Planetmath website. 28 March 2008. http://planetmath.org/PauliMatrices. Retrieved 28 May 2013.
 ↑ The Minkowski inner product is not an ^{*}inner product, since it is not ^{*}positive-definite, i.e. the ^{*}quadratic form η(v, v) need not be positive for nonzero v. The positive-definite condition has been replaced by the weaker condition of nondegeneracy. The bilinear form is said to be indefinite.
 ↑ The matrices in this basis, provided below, are the similarity transforms of the Dirac basis matrices of the previous paragraph, $ U^\dagger \gamma_D^\mu U $, where $ U = \frac{1}{\sqrt{2}}\left(1 - \gamma^5 \gamma^0\right) = \frac{1}{\sqrt{2}}\begin{pmatrix} I & I \\ -I & I \end{pmatrix} $.
 ↑ Wikipedia:Rotor (mathematics)
 ↑ Wikipedia:Spinor#Three_dimensions
 ↑ Wikipedia:Spinor
 ↑ Cartan, Élie (1981) [1938]. The Theory of Spinors. New York: Dover Publications. ISBN 978-0-486-64070-9, MR 631850, https://books.google.com/books?isbn=0486640701
 ↑ Roger Penrose (2005). The road to reality: a complete guide to the laws of our universe. Knopf. pp. 203–206.
 ↑ E. Meinrenken (2013), "The spin representation", Clifford Algebras and Lie Theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics, 58, Springer-Verlag, doi:10.1007/978-3-642-36216-3_3
 ↑ S.H. Dong (2011), "Chapter 2, Special Orthogonal Group SO(N)", Wave Equations in Higher Dimensions, Springer, pp. 13–38
 ↑ Andrew Marx, Shortcut Algebra I: A Quick and Easy Way to Increase Your Algebra I Knowledge and Test Scores, Kaplan Publishing, 2007, ISBN 978-1-4195-5288-5, 288 pages, p. 51.
 ↑ Wikipedia:Multiplicity (mathematics)
 ↑ Wikipedia:Partial fraction decomposition
 ↑ Wikipedia:Basic hypergeometric series
 ↑ Wikipedia:q-analog
 ↑ If $ e^x = y = \frac{dy}{dx} $, then $ dx = \frac{dy}{y} = \frac{1}{y}\,dy $, and therefore $ \int \frac{1}{y}\,dy = \int dx = x = \ln(y) $.
 ↑ Wikipedia:Product rule
 ↑ Wikipedia:Monotonic function
 ↑ Wikipedia:Generalized Fourier series
 ↑ Wikipedia:Spherical harmonics
 ↑ Wikipedia:Inverse Laplace transform
 ↑ http://mathworld.wolfram.com/PoissonKernel.html
 ↑ Wikipedia:Convolution theorem
 ↑ Wikipedia:RLC circuit
 ↑ Wikipedia:Total derivative
 ↑ Wikipedia:Residue (complex analysis)
 ↑ Wikipedia:Potential theory
 ↑ Wikipedia:Harmonic conjugate
 ↑ Wikipedia:Calculus of variations
 ↑ Wikipedia:Cover (topology)
 ↑ Joshi p. 323
 ↑ Wikipedia:Permutation
 ↑ Wikipedia:derangement
 ↑ Wikipedia:rencontres numbers
 ↑ Wikipedia:Central limit theorem
 ↑ Bland, J.M.; Altman, D.G. (1996). "Statistics notes: measurement error". BMJ 312 (7047): 1654. doi:10.1136/bmj.312.7047.1654. PMC 2351401. PMID 8664723. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2351401/
 ↑ Wikipedia:standard deviation
 ↑ Wikipedia:Hypergeometric distribution
 ↑ Wikipedia:Tit for tat
 ↑ Wikipedia:Atmospheric river
 ↑ A Psychological Analysis of Adolph Hitler: His Life and Legend, by Walter C. Langer. Office of Strategic Services (OSS), Washington, D.C. With the collaboration of Prof. Henry A. Murray, Harvard Psychological Clinic; Dr. Ernst Kris, New School for Social Research; Dr. Bertram D. Lewin, New York Psychoanalytic Institute. p. 219 (Nizkor project)
 ↑ Dr. Langer's work was published after the war as The Mind of Adolf Hitler, the wartime report having remained classified for over twenty years.
 ↑ Wikipedia:Slavs
 ↑ Wikipedia:Muslim conquest of Persia
 ↑ Wikipedia:First Bulgarian Empire and Wikipedia:Bulgars
 ↑ Wikipedia:Schutzstaffel