Notes on tensor/matrix notation

Matrix multiplication of tensors

Matrix multiplication is a formal operation on 2-dimensional tables of numbers. Some tensors can be represented as 2-dimensional matrices (see the sketch after the list below):

  • scalars: \(1\times 1\)

  • contravariant and covariant vector components: \(n\times 1\) and \(1\times m\)

  • linear operators and bilinear forms: \(n\times m\)
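
A minimal numpy sketch of the shapes listed above (the names and values are illustrative, not part of the notes):

```python
import numpy as np

n, m = 3, 3

scalar   = np.array([[2.0]])                        # 1 x 1
v_contra = np.arange(1.0, n + 1).reshape(n, 1)      # contravariant components: n x 1 column
v_co     = np.arange(1.0, m + 1).reshape(1, m)      # covariant components: 1 x m row
A        = np.arange(1.0, n * m + 1).reshape(n, m)  # linear operator / bilinear form: n x m

for name, obj in [("scalar", scalar), ("contravariant", v_contra),
                  ("covariant", v_co), ("operator", A)]:
    print(name, obj.shape)
```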

Ignoring the upper and lower indices for the moment, matrix multiplication is defined by

\begin{equation} (A\cdot B)_{ik}=\sum_{j}A_{ij}B_{jk}=\sum_{j}B_{jk}A_{ij} \nonumber \end{equation}

As you can see, the order of the factors does not matter once you use index notation. Whether or not an expression is a matrix multiplication is determined by whether the summed-over index can be made adjacent in the two objects (\(ij,jk\) above).
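
A short sketch of this point (an assumed example using numpy.einsum; the shapes are arbitrary): the index formula does not care about the order of the factors, while the matrix product A @ B fixes it.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

AB_matrix  = A @ B                          # (A . B)_{ik}
AB_index   = np.einsum('ij,jk->ik', A, B)   # sum_j A_ij B_jk
AB_swapped = np.einsum('jk,ij->ik', B, A)   # sum_j B_jk A_ij, same thing

assert np.allclose(AB_matrix, AB_index)
assert np.allclose(AB_matrix, AB_swapped)
```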

For example, \(C_{ij}D_{kj}\) is not a matrix multiplication as written, but since \(D_{kj}=(D^{T})_{jk}\), you can write \(C_{ij}D_{kj}=C_{ij}(D^{T})_{jk}=(C\cdot D^{T})_{ik}\).
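
A sketch of this transpose trick (shapes are assumed for illustration): \(C_{ij}D_{kj}\) computed directly via index notation agrees with the matrix product \(C\cdot D^{T}\).

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((3, 4))
D = rng.standard_normal((5, 4))   # the summed index j is the *second* index of both

E_index  = np.einsum('ij,kj->ik', C, D)   # C_ij D_kj
E_matrix = C @ D.T                        # (C . D^T)_ik

assert np.allclose(E_index, E_matrix)
```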

Only expressions containing components of tensors of rank at most 2 can be written down in matrix notation.
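
An illustrative sketch of why: a contraction involving a rank-3 tensor \(T_{ijk}\) has no single matrix-product expression over 2-dimensional tables, but index notation (here via einsum) still handles it. T and v are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 4, 5))   # rank-3: three indices, not a 2-d table
v = rng.standard_normal(5)

M = np.einsum('ijk,k->ij', T, v)     # sum_k T_ijk v_k -> a rank-2 result
print(M.shape)                       # (3, 4)
```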

Upper and lower indices

The position of the index denotes whether the object is a component of a vector or of a function acting on vectors (such as a basis vector).
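
A small sketch tying this to the shapes above (values are arbitrary): a covariant row (\(1\times n\)) applied to a contravariant column (\(n\times 1\)) is an ordinary matrix product whose result is a \(1\times 1\) table, i.e. a scalar.

```python
import numpy as np

omega = np.array([[1.0, 2.0, 3.0]])    # covariant components, 1 x 3
v = np.array([[4.0], [5.0], [6.0]])    # contravariant components, 3 x 1

result = omega @ v                     # 1 x 1 "matrix", i.e. a scalar
print(result)                          # [[32.]]
```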
