(2006-05-07) Etymology
Vectors were so named because they "carry" the distance from the origin.
In medical and other contexts,
"vector" is synonymous with "carrier".
The etymology is that of "vehicle": the Latin verb
vehere means "to transport".
In elementary geometry, a vector is the
difference between two points in space;
it's what has to be traveled to go from a given origin to a destination.
Etymologically, such a thing was perceived as "carrying"
the distance between two points
(the radius from a fixed origin to a point).
The term vector started out its mathematical life as part of
the French locution "rayon vecteur" (radius vector).
The whole expression is still used to identify a point in
ordinary (Euclidean) space, as seen from a fixed origin.
As presented next, the term vector more
generally denotes an element of a linear space (vector space)
of unspecified dimensionality (possibly infinite dimension)
over any scalar field (not just the real numbers).
(2006-03-28) Vector Space over a Field K
(called the ground field).
Vectors can be added, subtracted or scaled.
The scalars form a field.
The vocabulary is standard:
An element of the field K
is called a scalar.
Elements of the vector space are called vectors.
By definition, a vector space E is a set with a well-defined
internal addition (the sum U+V of two vectors is a vector)
and a well-defined external multiplication (i.e., for a scalar x
and a vector U, the scaled vector x U
is a vector) with the following properties:
(E, + ) is an Abelian group.
This is to say that the addition of vectors is an associative and commutative
operation and that subtraction is defined as well
(i.e., there's a zero vector,
neutral for addition, and every vector has an
opposite which yields zero when added to it).
Scaling is compatible with arithmetic on the field K:

∀x∈K,  ∀y∈K,  ∀U∈E,  ∀V∈E :
(x + y) U  =  x U + y U
x (U + V)  =  x U + x V
(x y) U  =  x (y U)
1 U  =  U
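These axioms are easy to spot-check numerically. Here is a minimal sketch (assuming the numpy library), taking K = ℝ and E = ℝ³:

```python
import numpy as np

# Spot-check of the vector-space axioms for K = R, E = R^3.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(2)          # two random scalars
U, V = rng.standard_normal((2, 3))     # two random vectors of R^3

assert np.allclose((x + y) * U, x*U + y*U)    # scaling distributes over scalar addition
assert np.allclose(x * (U + V), x*U + x*V)    # scaling distributes over vector addition
assert np.allclose((x * y) * U, x * (y * U))  # mixed associativity
assert np.allclose(1 * U, U)                  # the unit scalar acts trivially
assert np.allclose(U + V, V + U)              # (E, +) is commutative
```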
(2010-04-23) Independent vectors.
Basis of a vector space.
The dimension is the largest possible number of independent vectors.
The modern definition of a vector space doesn't
involve the concept of dimension which had a towering
presence in the historical examples of vector spaces taken from
Euclidean geometry: A line has dimension 1, a plane has dimension 2,
"space" has dimension 3, etc.
The concept of dimension is best retrieved by introducing two complementary
notions pertaining to a set of vectors B.
B is said to consist of
independent vectors when all
nontrivial linear combinations
of the vectors of B are nonzero.
B is said to generate E
when every vector of the space E is a linear combination of vectors of B.
A linear combination
of vectors is a sum of finitely many of those
vectors, each multiplied by a scaling factor
(called a coefficient ).
A linear combination with
at least one nonzero coefficient
is said to be nontrivial.
If B generates E and
consists of independent vectors, then it's
called a basis of E.
Note that the trivial space {0} has an empty basis
(the empty set does generate the space {0}
because an empty sum is zero).
To prove that all nontrivial vector spaces have a basis
requires the Axiom of Choice
(in fact, the existence of a basis for any nontrivial
vector space is equivalent to the Axiom of Choice).
HINT: Tukey's lemma (which is equivalent to the
axiom of choice) says that there's always at least one maximal set in a family
"of finite character". That notion is patterned after the family of all the
linearly independent sets in a vector space (which, indeed, contains a set
if and only if it contains all the finite subsets of that set).
A basis is a maximal set of linearly independent vectors.
Dimension theorem for vector spaces :
A not-so-obvious statement is that two bases of E
can always be put in one-to-one correspondence
with each other. Thus, all bases of E have the same
cardinal (finite or not).
That cardinal is called the dimension of the space E
and it is denoted dim (E).
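Numerically, independence is a rank computation. A small sketch (assuming numpy), with vectors stored as the rows of a matrix:

```python
import numpy as np

# Vectors (as rows) are independent iff the rank of the matrix
# they form equals their number.  Here dim(R^3) = 3.
B = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.]])
assert np.linalg.matrix_rank(B) == 3   # independent and generating: a basis of R^3

C = np.array([[1., 2., 3.],
              [2., 4., 6.]])           # second row = 2 * (first row)
assert np.linalg.matrix_rank(C) == 1   # dependent: a nontrivial combination vanishes
```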
(2010-06-21) Intersection and Sum of Subspaces.
A vector space included in another is called a subspace.
A subset F of a vector space E
is a subspace of E if and only if it is
stable under addition and scaling (i.e., the sum of two vectors of F
is in F and so is any vector of F
multiplied by a scalar).
It's an easy exercise to show that the intersection
F ∩ G of two subspaces
F and G is a subspace of E.
So is the Minkowski sum F+G
(defined as the set of all sums
x+y of a vector x from F and a vector
y from G).
Two subspaces of E for which
F ∩ G = {0} and
F+G = E are said to be supplementary.
Their sum is then called a direct sum and the
following compact notation is used to state that fact:
E  =  F ⊕ G
In the case of finitely many dimensions, the following relation holds:
dim ( F ⊕ G )
= dim (F) + dim (G)
The generalization to nontrivial intersections is Grassmann's Formula :
dim ( F + G )  =  dim (F) + dim (G)  −  dim ( F ∩ G )
A lesser-known version applies to spaces of finite codimensions:
codim ( F + G )  =  codim (F) + codim (G)  −  codim ( F ∩ G )
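Grassmann's formula lends itself to a numerical check. A sketch (assuming numpy and scipy), where each subspace is represented by a matrix whose columns span it:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
F = rng.standard_normal((7, 3))   # columns span a 3-dim subspace of R^7
G = rng.standard_normal((7, 4))   # columns span a 4-dim subspace of R^7

dim_F = np.linalg.matrix_rank(F)
dim_G = np.linalg.matrix_rank(G)
dim_sum = np.linalg.matrix_rank(np.hstack([F, G]))    # dim (F + G)

# When F and G have independent columns, each null vector of [F | -G]
# encodes one independent vector F a = G b of the intersection:
dim_cap = null_space(np.hstack([F, -G])).shape[1]     # dim (F ∩ G)

assert dim_sum == dim_F + dim_G - dim_cap             # Grassmann's formula
```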
(2010-12-03) Linear maps. Isomorphic vector spaces.
Two spaces are isomorphic if there's a linear bijection between them.
A function f
which maps a vector space E into another space F
over the same field K is said
to be linear if it respects addition and scaling:
" x,y Î K,
" U,V Î E,
f ( x U + y V ) =
x f ( U ) +
y f ( V )
If such a linear function f is
bijective, its inverse is also
a linear map and the vector spaces
E and F are said to be isomorphic.
E ≅ F
In particular, two vector spaces which have the same finite
dimension over the same field are necessarily isomorphic.
(2010-12-03) Quotient E/H
of a vector space E by a subspace H :
The equivalence classes (or residues) modulo H can be called slices.
If H is a subspace of the vector space E,
an equivalence relation can be defined by calling two vectors equivalent when
their difference is in H.
An equivalence class is called a slice, and it can be expressed as
a Minkowski sum of the form x+H.
The space E is thus partitioned into slices parallel to H.
The set of all slices is denoted E/H
and is called the quotient of E by H.
It's clearly a vector space (scaling a slice or adding up two slices yields a slice).
When E/H has finite dimension
that dimension is called the codimension of H.
A linear subspace of codimension 1 is called
a hyperplane of E.
x+H denotes the set of all sums
x+h where h is an element of H.
E/H
is indeed the quotient of E
modulo the equivalence relation which defines as equivalent
two vectors whose difference is in H.
The canonical linear map which sends a vector
x of E to the slice x+H
is called the quotient map
of E onto E/H.
A vector space is always isomorphic to the
direct sum of any subspace H
and its quotient by that same subspace:
E  ≅  H ⊕ E/H
Use this with H = ker ( f )
to prove the following fundamental theorem:
(2010-12-03)
Fundamental theorem of linear algebra. Rank theorem.
A vector space is isomorphic to the
direct sum of the image and kernel
(French: noyau) of any linear function defined over it.
The image or range of a
linear function f which maps a vector space E
to a vector space F is a subspace of F defined as follows:
im ( f )  =  range ( f )  =  f (E)  =  { y ∈ F  |  ∃ x ∈ E ,  f (x) = y }
The kernel (also called nullspace)
of f is the following subspace of E :
ker ( f )  =  null ( f )  =  { x ∈ E  |  f (x) = 0 }
The fundamental theorem of linear algebra states that
there's a subspace of E which is isomorphic to
f (E) and supplementary
to ker ( f ) in E.
This result holds for a finite or an infinite number of dimensions
and it's commonly expressed by the following isomorphism:
f (E) ⊕ ker ( f )  ≅  E
Proof :
This is a corollary of the above, since
f (E) and E / ker ( f )
are isomorphic because a bijective map between them is obtained
by associating uniquely f (x)
with the residue class x + ker ( f ).
Clearly, that association doesn't depend on the choice of x.
Restricted to vector spaces of finitely many dimensions,
the theorem amounts to the following
famous result (of great practical importance).
Rank theorem (or rank-nullity theorem) :
For any linear function f over a finite-dimensional space E, we have:
dim ( f (E) ) + dim ( ker ( f ) ) = dim ( E )
dim ( f (E) ) is called the rank of f.
The nullity of f is dim ( ker ( f ) ).
In the language of the matrices
normally associated with linear functions:
The rank and nullity
of a matrix add up to its number of columns.
The rank of a matrix A
is defined as the largest number of linearly independent columns
(or rows) in it.
Its nullity is the dimension of its nullspace (consisting, by definition, of
the column vectors x for which A x = 0).
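A numerical illustration of the rank theorem (a sketch assuming numpy and scipy):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],      # = 2 * (first row)
              [0., 1., 0., 1.]])     # A maps R^4 into R^3

rank    = np.linalg.matrix_rank(A)   # dim f(E)   = 2
nullity = null_space(A).shape[1]     # dim ker(f) = 2
assert rank + nullity == A.shape[1]  # = dim E = 4, the number of columns
```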
(2007-11-06) Normed Vector Spaces
A normed vector space is a linear space endowed with a norm.
Vector spaces can be endowed with a function (called norm)
which associates to any vector V
a real number ||V||
(called the norm or the
length of V) such that the following
properties hold:
||V|| is positive for any nonzero vector V.
||λV||  =  |λ| ||V||
|| U + V || ≤ || U || + || V ||
In this, λ is a scalar and |λ| denotes what's called a
valuation on the field of scalars
(a valuation is a special type of one-dimensional norm;
the valuation of a product is the product of the valuations of its factors).
Some examples of valuations are the absolute value
of real numbers, the modulus
of complex numbers and the
p-adic metric of p-adic numbers.
Let's insist:
The norm of a nonzero vector is always a positive real number,
even for vector spaces whose scalars aren't real numbers.
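The three axioms are easy to test numerically. A sketch (assuming numpy) for the usual Euclidean norm on ℂ⁴, where the valuation is the complex modulus:

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.standard_normal(4) + 1j * rng.standard_normal(4)
V = rng.standard_normal(4) + 1j * rng.standard_normal(4)
lam = 2.0 - 3.0j                     # a complex scalar

norm = np.linalg.norm
assert norm(U) > 0                                    # positivity (U is nonzero)
assert np.isclose(norm(lam * U), abs(lam) * norm(U))  # homogeneity: |lam| is the modulus
assert norm(U + V) <= norm(U) + norm(V) + 1e-12       # triangle inequality
```

Note that norm(U) is a positive real number even though the scalars are complex, as stressed above.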
(2020-10-09) Non-singular bilinear form f on E×E
v ↦ [ x ↦ f (x,v) ]  is an isomorphism from E to E*.
In this, E* is understood to be the continuous dual of E.
In finitely many dimensions, a pseudo inner product is non-degenerate if and
only if its associated determinant is nonzero.
(2020-09-30) Complex Inner-Product Spaces
Linear space endowed with a positive-definite sesquilinear form.
Parallelogram identity :
In an inner-product space, the following identity holds, which reduces to the Pythagorean theorem when ||u+v|| = ||u−v|| :
||u + v||² + ||u − v||²  =  2 ||u||² + 2 ||v||²
Polarization Identity :
Conversely, a norm which verifies the above parallelogram identity on a complex
linear space is necessarily derived from a sesquilinear inner product, obtained from the norm alone through
the following polarization formula:
<u,v>  =  ¼ ( ||u + v||² − ||u − v||² + i ||u − i v||² − i ||u + i v||² )
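Both identities can be checked numerically. A sketch (assuming numpy), using the convention that the inner product is conjugate-linear in its first argument, which matches the sign pattern of the polarization formula above:

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)

nsq = lambda w: np.linalg.norm(w) ** 2      # ||w||^2

# Parallelogram identity:
assert np.isclose(nsq(u + v) + nsq(u - v), 2*nsq(u) + 2*nsq(v))

# Polarization (inner product conjugate-linear in its first argument):
inner = (u.conj() * v).sum()
recovered = (nsq(u + v) - nsq(u - v) + 1j*nsq(u - 1j*v) - 1j*nsq(u + 1j*v)) / 4
assert np.isclose(inner, recovered)
```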
A Hilbert space is an inner-product space which is
complete with respect to the norm associated to
the defining inner-product (i.e., it's a
Banach space with respect to that norm).
The name Hilbert space itself was coined by
John von Neumann (1903-1957)
in honor of the pioneering work published by
David Hilbert (1862-1943) on the Lebesgue sequence space ℓ2
(which is indeed a Hilbert space).
(2009-09-03) Two Flavors of Duality
Algebraic duality & topological duality.
In a vector space E,
a linear form is a linear function which maps every
vector of E to a scalar of the underlying
field K. The set of all linear forms is called the
algebraic dual of E.
The set of all continuous
linear forms is called the
[ topological ] dual of E.
With finitely many dimensions, the two concepts are identical
(i.e., every linear form is continuous).
Not so with infinitely many dimensions.
An element of the dual (a continuous linear form)
is often called a covector.
Unless otherwise specified, we shall use the unqualified term dual
to denote the topological dual. We shall denote it E*
(some authors use E* to denote the algebraic dual and E'
for the topological dual).
The bidual E** of E is the dual of E*.
It's also called second dual or double dual.
A canonical injective homomorphism
Φ exists which immerses E into E**
by defining Φ(v), for any element v of
E, as the linear form on E* which maps every
element f of E* to the
scalar f (v). That's to say:
Φ(v) ( f )  =  f (v)
If the canonical homomorphism is
a bijection, then E
is said to be reflexive and it is
routinely identified with its bidual E**.
E = E**
If E has infinitely many dimensions,
its algebraic bidual is never
isomorphic to it.
That's the main reason why the notion of topological
duality is preferred. (Note that a
Hilbert space is always reflexive in the above sense,
even if it has infinitely many dimensions.)
Example of an algebraic dual :
If
E = ℂ(ℕ)
denotes the space consisting of all complex sequences
with only finitely many nonzero values, then the
algebraic dual of E
consists of all complex sequences without restrictions. In other words:
E' = ℂℕ, the space of all complex sequences.
Indeed, an element f of E' is a linear
form over E which is uniquely determined by the unrestricted
sequence of scalars formed by the images of the elements in the
countable basis
(e0 ,
e1 ,
e2 ... ) of E.
E' is a Banach space, but
E is not (it isn't complete).
As a result, an absolutely convergent series need not
converge in E.
For example, the series of general term en / (n+1)²
doesn't converge in E, although it's absolutely
convergent (because the series formed by the norms
of its terms is a well-known convergent real series).
Representation theorems for [continuous] duals :
A representation theorem is a statement that
identifies in concrete terms some abstractly specified entity.
For example, the celebrated
Riesz representation theorem states that the [continuous] dual
of the Lebesgue space Lp (an abstract
specification) is just isomorphic to the space
Lq where q is a simple function of p
(namely, 1/p+1/q = 1 ).
Lebesgue spaces are usually linear spaces with
uncountably many dimensions
(their elements are functions over a continuum, like ℝ or an interval thereof).
However, the Lebesgue sequence spaces
described in the next section are simpler
(they have only countably many dimensions)
and can serve as a more accessible example.
(2012-09-19) Lebesgue sequence spaces
ℓp and ℓq are duals of each other when 1/p + 1/q = 1.
For p > 1, the linear space ℓp is defined as the subspace of ℂℕ (the space of all complex sequences)
consisting of all sequences for which the following series converges:
( || x ||p )^p  =  ( || (x0 , x1 , x2 , x3 , ... ) ||p )^p  =  Σn | xn |^p
As the notation implies, ||.||p is a norm on ℓp
because of the following famous nontrivial inequality
(Minkowski's inequality)
which serves as the relevant triangle inequality :
|| x+y ||p  ≤  || x ||p + || y ||p
For the topology induced by that so-called "p-norm", the [topological] dual of
ℓp is isomorphic to ℓq , where:
1/p + 1/q  =  1
Thus, ℓp is reflexive (i.e., isomorphic to its own bidual) for any p > 1.
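Both Minkowski's inequality and the duality pairing can be illustrated on long (finite) sequences. A sketch assuming numpy; the bound on the pairing is Hölder's inequality:

```python
import numpy as np

p, q = 3.0, 1.5                         # conjugate exponents: 1/p + 1/q = 1
assert np.isclose(1/p + 1/q, 1.0)

rng = np.random.default_rng(4)
x, y = rng.standard_normal((2, 1000))   # long but finite "sequences"

pnorm = lambda z, r: (np.abs(z) ** r).sum() ** (1/r)

# Minkowski's inequality (the triangle inequality for ||.||p):
assert pnorm(x + y, p) <= pnorm(x, p) + pnorm(y, p)

# Hölder's inequality: the pairing <x,y> = sum of x_n y_n, which realizes
# the duality between lp and lq, is bounded by ||x||p ||y||q :
assert abs((x * y).sum()) <= pnorm(x, p) * pnorm(y, q)
```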
(2009-09-03) Tensorial Product and Tensors
E ⊗ F is generated by tensor products.
Consider two vector spaces E and F over the
same field of scalars K.
For two covectors f and g
(respectively belonging to E* and F*) we may consider
a particular [continuous] linear form denoted
f ⊗ g
and defined over the cartesian product E×F via the relation:
f ⊗ g (u,v)  =  f (u) g (v)
The binary operator ⊗
thus defined from (E*)×(F*)
to (E×F)*
is called tensor product.
(Even when E = F, the operator
⊗ is not commutative.)
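In coordinates, f ⊗ g is just the outer product of the two coefficient arrays. A small sketch (assuming numpy):

```python
import numpy as np

# Covectors on E = R^3 and F = R^2, represented by coefficient rows:
f = np.array([1., 2., -1.])          # f(u) = f . u
g = np.array([3., -5.])              # g(v) = g . v

u = np.array([0.5, 1.0, 2.0])
v = np.array([-1.0, 4.0])

# As a bilinear form on E x F, the tensor product f ⊗ g has the
# coefficient matrix np.outer(f, g):
T = np.outer(f, g)
assert np.isclose(u @ T @ v, (f @ u) * (g @ v))    # (f⊗g)(u,v) = f(u) g(v)
assert np.allclose(np.outer(f, g).T, np.outer(g, f))  # swapping factors transposes
```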
Example of Use: The Dehn invariant (1901)
In the Euclidean plane, two simple polygonal loops which enclose the same
area are always decomposable
into each other. That's to say, with a finite number of
straight cuts, we can obtain pieces of one shape which
are pairwise congruent to the pieces of the other
(William Wallace, 1807;  Paul Gerwien, 1833;  Farkas Bolyai, 1835).
Hilbert's third problem
(the equidecomposability problem, 1898)
asked whether the same is true for any pair of polyhedra
having the same volume.
Surprisingly enough, that's not so because volume
is not the only invariant preserved by straight cuts in three dimensions.
The other invariant (there are only two) is now known as
Dehn's invariant, in honor of
Max Dehn (1878-1952)
the doctoral student of Hilbert who based his habilitation thesis
on its discovery (September 1901). Here's a description:
The Dehn invariant of a polyhedron is the sum, over all of its edges, of the tensor products ℓ ⊗ θ of the length ℓ of each edge by its dihedral angle θ, computed in the tensor product of the ℚ-linear spaces ℝ and ℝ/πℚ (so that any angle which is a rational multiple of π contributes nothing).
(2015-02-21) Graded vector spaces and supervectors.
Direct sum of vector spaces indexed by a monoid.
When the indexing monoid is the set of natural integers {0,1,2,3,4...}
or part thereof, the degree n of a vector is the
smallest integer such that the direct sum of the subfamily
indexed by {0,1 ... n} contains that vector.
(2007-04-30) [ Linear ] Algebras over a Field K
An internal product among vectors turns a vector space into an algebra.
They're also called distributive algebras because of a mandatory property
they share with rings. Any ring of characteristic 2 is trivially
an associative algebra over the
Boolean field F2 (the field F2 itself being the 1-dimensional case).
Any associative algebra is a ring.
Rings are always postulated to be associative. Algebras need not be.
An algebra is the structure obtained when
an internal binary multiplication is well-defined on the vector space E
(the product of two vectors is a vector) which is bilinear and
distributive over addition. That's to say:
"xÎK,
"UÎE,
"VÎE,
"WÎE,
x (UV) = U (V + W) =
(V + W) U =
(x U) V = U (x V) U V + U W V U + W U
The commutator is the following
bilinear function. If it's identically zero,
the algebra is said to be commutative.
[ U , V ]  =  (UV) − (VU)
The lesser-used anticommutator is the following
bilinear function. If it's identically zero,
the algebra is said to be anticommutative.
{ U , V }  =  (UV) + (VU)
The commutator is anticommutative. The anticommutator is commutative.
The associator
is defined as the following trilinear function.
It measures how the internal multiplication fails to be associative.
[ U , V , W ]  =  U (VW) − (UV) W
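Square matrices (an associative algebra under matrix multiplication) illustrate all three functions. A sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(9)
U, V, W = rng.standard_normal((3, 4, 4))   # square matrices form an associative algebra

comm  = U @ V - V @ U                      # commutator [U,V]
acomm = U @ V + V @ U                      # anticommutator {U,V}
assoc = U @ (V @ W) - (U @ V) @ W          # associator [U,V,W]

assert not np.allclose(comm, 0)            # matrix multiplication isn't commutative...
assert np.allclose(assoc, 0)               # ...but it is associative
```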
If its internal product has a neutral element, the algebra is called unital :
∃ 1 ,  ∀ U ,   1U = U1 = U
By definition, a derivation in an algebra is a vectorial
endomorphism D (i.e., D is a linear operator)
which obeys the following relation:
D ( U V ) = D(U) V + U D(V)
One nontrivial example of a derivation of some historical importance is the
Dirac operator.
The derivations over an algebra form the Lie algebra of derivations,
where the product of two derivations is defined as their
commutator.
One important example of that is the Witt algebra,
introduced in 1909 by Elie Cartan
(1869-1951) and studied at length in the 1930s by
Ernst Witt (1911-1991).
The Witt algebra is the Lie algebra of derivations of the
Laurent polynomials with complex coefficients
(which may be viewed as the polynomials in two complex variables X and Y with XY = 1, namely
ℂ[z, 1/z] ).
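The defining bracket relation of the Witt algebra, [Lm, Ln] = (m−n) Lm+n for the derivations Ln = −z^(n+1) d/dz, can be checked symbolically. A sketch assuming sympy:

```python
import sympy as sp

z = sp.symbols('z')

def L(n, f):
    """Witt basis element L_n = -z**(n+1) d/dz applied to f(z)."""
    return -z**(n + 1) * sp.diff(f, z)

f = z**5 + 3/z**2          # an arbitrary Laurent polynomial
m, n = 2, -3
bracket = sp.expand(L(m, L(n, f)) - L(n, L(m, f)))     # [L_m, L_n] f
assert sp.simplify(bracket - (m - n) * L(m + n, f)) == 0
```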
When the associator defined above
is identically zero, the algebra is said to be associative
(many authors often use the word "algebra" to denote only
associative algebras,
including Clifford algebras).
In other words, associative algebras fulfill the additional requirement:  ∀U, ∀V, ∀W,   (UV) W  =  U (VW)
In a context where distributivity is a mandatory property of algebras, that locution merely denotes algebras
which may or may not be associative:
A good example of this early usage was given in 1942, by a top expert:
That convention is made necessary by the fact that the unqualified word
algebra is very often used to denote only associative algebras.
This obeys the usual inclusive
terminology in mathematical discourse,
whereby the properties we specify do not preclude stronger ones.
Since all algebras are distributive, this specification is indeed a good
way to stress that nothing special is assumed about multiplication;
neither associativity nor any of the weaker properties presented below.
Unfortunately, Dick Schafer himself later recanted and
introduced a distinction between not associative and nonassociative
(no hyphen), with the latter not precluding associativity.
He explains that in the opening lines of his reference book on the subject,
thus endowed with a catchy title:
"Introduction to Nonassociative Algebras"
(1961,
AP 1966, 1994,
Dover 1995,
2017).
I beg to differ. Hyphenation is too fragile a distinction
and a group of experts simply can't redefine the hyphenated term
non-associative.
Ignore
Wikipedia's advice:
Refrain from calling non-associative (hyphenated) something
which may well be associative!
Therefore, unless the full associativity of multiplication is taken for granted,
I'm using only the following set of inclusive locutions:
Let's examine all those lesser types of algebras, strongest first:
Alternative Algebras :
In general, a multilinear function is said to be alternating
if its sign changes when the arguments undergo an
odd permutation. An algebra is said to be
alternative when the aforementioned
associator is alternating.
The alternativity condition is satisfied if and only if two of the following statements
hold (the third one then follows) :
∀U, ∀V,   U (UV) = (UU) V     (Left alternativity.)
∀U, ∀V,   U (VV) = (UV) V     (Right alternativity.)
∀U, ∀V,   U (VU) = (UV) U     (Flexibility.)
Octonions are a non-associative example of such an
alternative algebra.
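This can be illustrated with the Cayley-Dickson construction, sketched below under one common sign convention (the helper names mul and conj are mine). Doubling the quaternions yields the octonions, which are alternative but not associative; doubling once more yields the sedenions, which are still flexible but no longer alternative (as discussed further down):

```python
import numpy as np

def conj(x):
    """Cayley-Dickson conjugate: negate every coordinate but the real one."""
    y = -x.copy(); y[0] = x[0]
    return y

def mul(x, y):
    """Cayley-Dickson product (a,b)(c,d) = (ac - d*b, da + bc*), recursively."""
    n = len(x)
    if n == 1:
        return x * y
    a, b, c, d = x[:n//2], x[n//2:], y[:n//2], y[n//2:]
    return np.concatenate([mul(a, c) - mul(conj(d), b),
                           mul(d, a) + mul(b, conj(c))])

assoc = lambda u, v, w: mul(u, mul(v, w)) - mul(mul(u, v), w)

rng = np.random.default_rng(5)
u, v, w = rng.standard_normal((3, 8))        # three random octonions
assert not np.allclose(assoc(u, v, w), 0)    # octonions aren't associative...
assert np.allclose(assoc(u, u, v), 0)        # ...but they are alternative
assert np.allclose(assoc(u, v, v), 0)

s, t = rng.standard_normal((2, 16))          # two random sedenions
assert np.allclose(mul(s, mul(t, s)), mul(mul(s, t), s))   # flexible...
assert not np.allclose(assoc(s, s, t), 0)    # ...but not alternative
```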
Power-Associative Algebras :
Power-associativity states that the n-th power
of any element is well-defined regardless of
the order in which multiplications are performed:
The number of ways to work out a product of n identical factors
is the Catalan number Cn−1 = C(2n−2, n−1)/n :
1, 1, 2, 5, 14, 42, 132...   (A000108)
Power-associativity means that, for any given n, all those ways yield
the same result. The following special case (n=3) is not sufficient:
"U ,
U (UU) = (UU) U
That just says that cubes are well-defined.
The accepted term for that is third-power associativity
(I call it cubic associativity or cube-associativity for short).
It's been put to good use at least once, in print:
By definition, a subalgebra is an algebra contained in another
(the operations of the subalgebra being restrictions of the operations in the whole algebra).
Any intersection
of subalgebras is a subalgebra. The subalgebra generated by a subset
is the intersection of all subalgebras containing it.
The above three types of subassociativity can be fully characterized in terms of the associativity
of the subalgebras generated by 1, 2 or 3 elements:
If all subalgebras generated by one element are associative,
then the whole algebra is power-associative.
If all subalgebras generated by two elements are associative,
then the whole algebra is alternative
(theorem of Artin).
If all subalgebras generated by three elements are associative,
then the whole algebra is associative too.
In particular, commutative or anticommutative products are flexible.
Flexibility is usually not considered a form of subassociativity because
it doesn't fit into the neat classification of the previous section.
Flexibility is preserved by the Cayley-Dickson construction.
Therefore, all hypercomplex multiplications are flexible
(Richard D. Schafer, 1954).
In particular, the multiplication of sedenions is flexible
(but not alternative).
In a flexible algebra, the right-powers of an element coincide with the matching left-powers, but that doesn't
make all powers well-defined. A flexible algebra is cube-associative but
not necessarily power-associative. In particular, fourth powers need not be well-defined:
A (A A)  =  (A A) A  =  A³     and     A A³  =  A³ A
but A A³ may differ from A² A².
Example of a two-dimensional flexible algebra which isn't power-associative :

    ×  |  A    B
   ----+----------
    A  |  B    B
    B  |  B    A

The multiplication defined by this table is flexible because it's commutative.
Yet, the fourth power isn't well-defined, since the two possible ways to compute it disagree:
A (A (A A)) = B   whereas   (A A) (A A) = A.
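A direct check of that counterexample (a sketch assuming numpy; the basis vectors A and B are multiplied through the structure constants of the table):

```python
import numpy as np

# The 2-dimensional commutative algebra above, on the basis (A, B):
# A*A = B,  A*B = B*A = B,  B*B = A.
A, B = np.array([1., 0.]), np.array([0., 1.])
table = {(0, 0): B, (0, 1): B, (1, 0): B, (1, 1): A}

def mul(x, y):
    return sum(x[i] * y[j] * table[i, j] for i in range(2) for j in range(2))

sq = mul(A, A)                       # A^2 = B
cube = mul(A, sq)                    # A^3 = A.A^2 = A^2.A  (commutative, hence flexible)
assert np.allclose(mul(A, cube), B)  # A.(A^3)     = A.B = B
assert np.allclose(mul(sq, sq), A)   # (A^2).(A^2) = B.B = A : the other "fourth power"
```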
(2015-02-14) Lie algebras over a Field K
Anticommutative algebras obeying Jacobi's identity.
Hermann Weyl (1885-1955) named those structures
after the Norwegian mathematician Sophus Lie (1842-1899).
The basic internal multiplication
in a Lie algebra is a bilinear operator denoted by a square bracket
(called a Lie bracket ) which must be anticommutative
and obey the so-called Jacobi identity, namely:
[B,A]  =  − [A,B]
[A,[B,C]] + [B,[C,A]] + [C,[A,B]]  =  0
Anticommutativity implies
[A,A] = 0
only in the absence of 2-torsion.
The cross-product gives ordinary 3D vectors a Lie algebra structure.
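A numerical spot-check of both defining properties for the cross product (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, c = rng.standard_normal((3, 3))

assert np.allclose(np.cross(a, b), -np.cross(b, a))   # anticommutativity
jacobi = (np.cross(a, np.cross(b, c)) +
          np.cross(b, np.cross(c, a)) +
          np.cross(c, np.cross(a, b)))
assert np.allclose(jacobi, 0)                         # Jacobi identity
```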
Representations of a Lie Algebra :
The bracket notation is compatible with the key example
appearing in quantum mechanics,
where the Lie bracket is obtained as the
commutator over an ordinary linear algebra of linear operators
with respect to the functional
composition of operators, namely:
[ U , V ]  =  U o V − V o U
In quantum mechanics, the relevant linear operators are
Hermitian (they're called observables).
To make the Lie bracket an internal product among those Hermitian operators,
we have to multiply the right-hand side of the above defining relation
by the imaginary unit i
(or any real multiple thereof). This slight complication is irrelevant
to the theoretical discussion of the general case and we'll ignore it here.
If the Lie bracket is so defined, the Jacobi identity is a simple theorem,
whose proof is straightforward; just sum up the following three equations:
[A,[B,C]]  =  A o (BoC − CoB)  −  (BoC − CoB) o A
[B,[C,A]]  =  B o (CoA − AoC)  −  (CoA − AoC) o B
[C,[A,B]]  =  C o (AoB − BoA)  −  (AoB − BoA) o C
Linearity makes "o" distribute over "−",
so the sum of the right-hand sides consists of 12 terms, in which each of the 6 possible permutations
of the operators appears twice with opposite signs. The whole sum is thus zero.
Conversely, an anticommutative algebra obeying the Jacobi identity is said to have
a representation in terms of linear operators if
it's isomorphic to the Lie algebra formed by those operators when
the bracket is defined as the commutator of operators.
Loosely speaking, the Lie algebra associated to a Lie group is
its tangent space at the origin (about the identity).
Lie(G) is formed by all the left-invariant vector-fields on G.
A vector field
X on a Lie Group G is said to be invariant under left translations when
"gÎG,
"hÎG,
(dlg)h (Xh) = Xgh
where lg is the left-translation within G (lg(x) = g x)
and dlg is its differential between tangent spaces.
Lie's third theorem
states that every real finite-dimensional Lie algebra is associated to some Lie group.
However, there exist infinite-dimensional Lie algebras not associated to any Lie group.
Adrien Douady (1935-2007)
pointed out the first example
one late evening after a Bourbaki meeting...
Douady's counter-example is known as Heisenberg's Lie algebra (non-exponential Lie algebra)
and arises naturally in quantum mechanics to describe
the motion of particles on a straight line by means of
three operators (X, Y and Z) acting on any square-integrable
function f of a real variable (which form an
infinite-dimensional space):
(X f ) (x) = x f (x)
(Y f ) (x) = −i ∂x f (x)
(Z f ) (x) = -i f (x)
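With these conventions, the commutator [X,Y] acts as a scalar multiple of Z, which can be verified symbolically. A sketch assuming sympy:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')(x)

X = lambda g: x * g                      # (X f)(x) =  x f(x)
Y = lambda g: -sp.I * sp.diff(g, x)      # (Y f)(x) = -i f'(x)
Z = lambda g: -sp.I * g                  # (Z f)(x) = -i f(x)

comm = sp.simplify(X(Y(f)) - Y(X(f)))    # [X,Y] f = i f(x)
assert sp.simplify(comm + Z(f)) == 0     # i.e.,  [X,Y] = -Z
```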
This cannot be realized as the tangent space of a connected three-dimensional Lie group,
because the Lie algebra associated to any such group is either abelian or solvable,
and the non-exponential Lie algebra is neither.
Semisimple Lie Algebras & Semisimple Lie Groups :
A simple Lie algebra is a non-abelian Lie algebra without any
nonzero proper ideal.
A simple Lie group is a connected Lie group whose Lie algebra is simple.
A semisimple Lie algebra is a direct sum of simple Lie algebras.
A semisimple Lie group is a connected Lie group whose Lie algebra is semisimple.
Any Lie algebra L is the semidirect sum of its radical R
(a maximal solvable ideal) and a semisimple algebra S.
(2023-04-08) Leibniz algebras (Blok 1965, Loday 1993)
They're Lie algebras if the bracket is alternating (i.e., [x,x] = 0).
Much of the literature about Leibniz algebras consists of pointing out that results
known for Lie algebras are also true in this more general context.
Originally, Leibniz algebras were called D-algebras.
That early name was coined in 1965 by
Alexander Mikhailovich Blok (1946-1986)
in a paper named "Cohomology in the algebra of exterior forms" (1965).
He switched to the name "Leibniz algebras" in a paper published in 1977 under the title
"The variety of Leibniz algebras".
Blok (often misspelled "Bloh")
received his doctorate in 1970 from
Leningrad State University
under Dmitry B. Fuchs (1939-)
for a thesis entitled "The cohomology of Lie superalgebras".
The "D" stood for derivation,
a name which identifies any operator D obeying the following
Leibniz law of ordinary derivatives:
D (x y) = x D(y) + D(x) y
If D is the operator corresponding to multiplication to the right by some element z,
that becomes the following substitute for associativity:
(x y) z = x (y z) + (x z) y
If brackets are used for multiplication, as is common here, that reads:
(Right) Leibniz Identity
[[x,y],z] = [[x,z],y] + [x,[y,z]]
Indeed, Blok (1965) and Loday (1993) defined a Leibniz algebra as a vector space
(or a module) with a bilinear bracket verifying that identity.
In the anticommutative case, the Leibniz identity and Jacobi's identity are equivalent.
Thus a Lie algebra is just an anticommutative Leibniz algebra.
The Koszul duals of Leibniz algebras are called
Zinbiel algebras ("Leibniz" backwards).
Zinbiel was a pseudonym used by Loday, who created a fictitious character by that name as a joke.
(2015-02-19) Jordan algebras are commutative.
Turning any linear algebra A
into a commutative one A+.
Those structures were introduced in 1933 by Pascual Jordan (1902-1980).
They were named after him by A. Adrian Albert (1905-1972) in 1946.
(Richard Schafer, a former student of Albert, wouldn't
use that name, possibly because Jordan was a notorious Nazi).
Just like commutators turn linear operators into a Lie algebra,
a Jordan algebra is formed by using the anti-commutator or Jordan product :
U V  =  ½ ( U o V + V o U )
Axiomatically, a Jordan algebra is a commutative algebra
( UV = VU )
obeying Jordan's identity :
(UV) (UU)  =  U (V (UU)).
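Real symmetric matrices under the Jordan product form the classic ("special") example, and both axioms can be spot-checked numerically. A sketch assuming numpy:

```python
import numpy as np

# Jordan product on matrices: U•V = (UV + VU)/2.
rng = np.random.default_rng(7)
M, N = rng.standard_normal((2, 4, 4))
U, V = M + M.T, N + N.T                 # two random symmetric matrices

jp = lambda P, Q: (P @ Q + Q @ P) / 2

assert np.allclose(jp(U, V), jp(V, U))  # commutativity
lhs = jp(jp(U, V), jp(U, U))            # (U•V)•(U•U)
rhs = jp(U, jp(V, jp(U, U)))            # U•(V•(U•U))
assert np.allclose(lhs, rhs)            # Jordan's identity
```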
(2007-04-30) Clifford algebras over a Field K
Unital associative algebras endowed with a quadratic form.
Those structures are named after the British geometer and philosopher
William Clifford (1845-1879) who originated the concept in 1876.
The first description of Clifford algebras centered on
quadratic forms was given in 1945 by
a founder of the Bourbaki collaboration:
Claude Chevalley (1909-1984).
The grade of a product has the parity of the sum of the grades of its factors (grades add modulo 2).
Products of gamma matrices :  a = γ0 ,  b = γ1 ,  c = γ2 ,  d = γ3  (with e = abcd).
Rows and columns are sorted by grade:  I (grade 0);  a, b, c, d (grade 1);  ab, ac, ad, bc, bd, cd (grade 2, bivectors);  ae, be, ce, de (grade 3);  e (grade 4).
Each cell gives the product (row)·(column):

  ×  |   I    a    b    c    d   ab   ac   ad   bc   bd   cd   ae   be   ce   de    e
 ----+--------------------------------------------------------------------------------
  I  |   I    a    b    c    d   ab   ac   ad   bc   bd   cd   ae   be   ce   de    e
  a  |   a    I   ab   ac   ad    b    c    d   de  -ce   be    e   cd  -bd   bc   ae
  b  |   b  -ab   -I   bc   bd    a  -de   ce   -c   -d   ae  -cd   -e  -ad   ac   be
  c  |   c  -ac  -bc   -I   cd   de    a  -be    b  -ae   -d   bd   ad   -e  -ab   ce
  d  |   d  -ad  -bd  -cd   -I  -ce   be    a   ae    b    c  -bc  -ac   ab   -e   de
  ab |  ab   -b   -a   de  -ce    I  -bc  -bd  -ac  -ad    e  -be  -ae   -d    c   cd
  ac |  ac   -c  -de   -a   be   bc    I  -cd   ab   -e  -ad  -ce    d  -ae   -b  -bd
  ad |  ad   -d   ce  -be   -a   bd   cd    I    e   ab   ac  -de   -c    b  -ae   bc
  bc |  bc   de    c   -b   ae   ac  -ab    e   -I   cd  -bd   -d   ce  -be   -a  -ad
  bd |  bd  -ce    d  -ae   -b   ad   -e  -ab  -cd   -I   bc    c   de    a  -be   ac
  cd |  cd   be   ae    d   -c    e   ad  -ac   bd  -bc   -I   -b   -a   de  -ce  -ab
  ae |  ae   -e  -cd   bd  -bc   be   ce   de   -d    c   -b    I   ab   ac   ad   -a
  be |  be   cd    e   ad  -ac   ae    d   -c  -ce  -de   -a  -ab   -I   bc   bd   -b
  ce |  ce  -bd  -ad    e   ab   -d   ae    b   be    a  -de  -ac  -bc   -I   cd   -c
  de |  de   bc   ac  -ab    e    c   -b   ae   -a   be   ce  -ad  -bd  -cd   -I   -d
  e  |   e  -ae  -be  -ce  -de   cd  -bd   bc  -ad   ac  -ab    a    b    c    d   -I
In the above table, some of the off-diagonal products commute (those highlighted in blue in the original color rendition);
all the other off-diagonal products anticommute.
There are two opposite ways to associate the letter "a" to the time coordinate (ct).
Likewise, we can assign 3 of the 4 remaining letters (b,c,d,e) to 3 given orthogonal axes
in 120 nonoriented ways or 1920 oriented ones. Thus, a spacetime reference frame can
be labeled in 1920 equivalent ways.
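The relations encoded in the table (a² = I, b² = c² = d² = −I, distinct generators anticommute, e² = −I) can be verified with an explicit matrix representation. A sketch assuming numpy, using one common (Dirac) representation of the gamma matrices:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)       # Pauli matrices
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(tl, tr, bl, br):
    return np.block([[tl, tr], [bl, br]])

a = block(I2, 0*I2, 0*I2, -I2)                       # gamma_0
b, c, d = (block(0*I2, s, -s, 0*I2) for s in (sx, sy, sz))   # gamma_1..3
e = a @ b @ c @ d                                    # the pseudoscalar e = abcd

I4 = np.eye(4)
assert np.allclose(a @ a, I4)                        # a^2 = I
for g in (b, c, d):
    assert np.allclose(g @ g, -I4)                   # b^2 = c^2 = d^2 = -I
for g, h in [(a, b), (a, c), (a, d), (b, c), (b, d), (c, d)]:
    assert np.allclose(g @ h, -h @ g)                # distinct gammas anticommute
assert np.allclose(e @ e, -I4)                       # e^2 = -I, as in the table
```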
Covariant Differential Operators
The simplest one was found by Dirac
in 1927. Its eigenvectors describe particles of spin ½ and mass m,
like the electron.
"... and then, you could linearize the sum of four squares. Actually, you could even linearize the sum of five squares..."   Paul Dirac  (Accademia dei Lincei, Rome, 1975-04-15).
(2015-02-23) Involutive Algebras
A special linear involution is singled out (adjunction or conjugation).
As the adjoint or conjugate
of an element U is usually denoted
U*, such structures are also called *-algebras (star-algebras).
The following properties are postulated:
(U*)*  =  U
(U + V)*  =  U* + V*
(x U)*  =  x̄ U*    (where x̄ is the conjugate of the scalar x)
(U V)*  =  V* U*
(2015-02-23) Von Neumann Algebras
Compact operators resemble ancient infinitesimals...
John von Neumann (1903-1957) introduced those structures
in 1929 (he called them simply rings of operators).
Von Neumann presented their basic theory in 1936 with the help of
Francis Murray
(1911-1996).
By definition, a factor is a Von Neumann algebra with a trivial
center
(which is to say that only the scalar multiples of identity commute with all the elements of the
algebra).
(2009-09-25) On multi-dimensional objects that are "not vectors"...
To a mathematician, the juxtaposition (or
cartesian product )
of several vector spaces over the same field K is always
a vector space over that field (as component-wise definitions
of addition and scaling satisfy the above axioms).
When physicists state that some particular juxtaposition of
quantities (possibly a single numerical quantity by itself) is
"not a scalar", "not a vector" or "not a tensor"
they mean that the thing lacks an unambiguous and intrinsic definition.
Typically, a flawed vectorial definition would actually depend on
the choice of a frame of reference for the physical universe.
For example, the derivative of a scalar with respect to the first
spatial coordinate is "not a scalar" (that quantity depends on
what spatial frame of reference is chosen).
Less trivially, the gradient of a scalar is a physical covector
(of which the above happens to be one covariant coordinate). Indeed,
the definition of a gradient specifies the same object (in dual
space) for any choice of a physical coordinate basis.
Some physicists routinely introduce (especially in the context
of General Relativity)
vectors as "things that transform like elementary displacements" and
covectors as "things that transform like gradients".
Their students are thus expected to grasp a complicated notion
(coordinate transformations) before the stage is set.
Newbies will need several passes
through that intertwined logic before they "get it".
I'd rather introduce the mathematical notion of a vector first.
Having easily absorbed that straight notion, the student may then be asked to consider
whether a particular definition depends on a choice of coordinates.
For example, the linear coefficient of thermal expansion (CTE) cannot be properly defined
as a scalar (except for isotropic substances);
it's a tensor.
On the other hand, the related cubic CTE is always a scalar
(which is equal to the trace of the aforementioned
CTE tensor).
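The coordinate-dependence test is easy to mechanize. A sketch (assuming numpy) showing that displacement components and gradient components transform oppositely under a change of basis, so that their scalar pairing is frame-independent:

```python
import numpy as np

rng = np.random.default_rng(8)
P = rng.standard_normal((3, 3))            # new basis vectors = columns of P
dx = rng.standard_normal(3)                # a displacement (contravariant)
grad = rng.standard_normal(3)              # a gradient (covariant)

dx_new = np.linalg.solve(P, dx)            # components transform by P^-1
grad_new = P.T @ grad                      # components transform by P^T

assert np.isclose(grad @ dx, grad_new @ dx_new)   # invariant pairing
```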
(2007-08-21) Geometric Calculus of David Hestenes
Unifying some notations of mathematical physics...
Observing similarities in distinct areas of mathematical physics, and building on groundwork
from Grassmann
(1844)
and Clifford (1876),
David Hestenes (1933-)
has been advocating a denotational unification,
which has garnered quite a few enthusiastic followers.
The approach is called Geometric Algebra by its proponents.
The central objects are called multivectors.
Their coordinate-free manipulation goes by the name of
multivector calculus or geometric calculus,
a term which first appeared in the title of Hestenes' own doctoral dissertation
(1963).
That's unrelated to the abstract field of Algebraic Geometry
(which has been at the forefront of mainstream mathematical research for decades).