Generating Operations

In some sense, the determinant is able to generate the features of various algebras we know and love. For example, for vectors and their relationships we have \[\textbf{r}_1 \times \textbf{r}_2 = \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \end{vmatrix} \\ \\ \textbf{r}_1 \cdot (\textbf{r}_2 \times \textbf{r}_3)=\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix}\]

where the second, the scalar triple product, is the volume of the parallelepiped formed by the three vectors. These were taken from (Leach, Molecular Modelling, 2nd Edition), although they are widely known. Generally these are just shorthands or mnemonics; the mathematical value is limited.
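As a quick numerical check, here is a minimal sketch (using NumPy, with arbitrary example vectors) confirming that the cofactor expansion along the \(\textbf{i},\textbf{j},\textbf{k}\) row reproduces the cross product, and that the determinant of the stacked coordinates equals the scalar triple product.

```python
import numpy as np

# Arbitrary example vectors, purely for illustration
r1 = np.array([1.0, 2.0, 3.0])
r2 = np.array([4.0, 5.0, 6.0])
r3 = np.array([7.0, 8.0, 10.0])

# Cross product via the cofactor expansion along the row of unit vectors
cross = np.array([
    r1[1] * r2[2] - r1[2] * r2[1],     # i-cofactor
    -(r1[0] * r2[2] - r1[2] * r2[0]),  # j-cofactor (note the sign)
    r1[0] * r2[1] - r1[1] * r2[0],     # k-cofactor
])
assert np.allclose(cross, np.cross(r1, r2))

# Scalar triple product as the determinant of the stacked coordinates
triple = np.linalg.det(np.vstack([r1, r2, r3]))
assert np.isclose(triple, np.dot(r1, np.cross(r2, r3)))
```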

The difference between these two then appears to be the introduction of rows of \(i,j,k\) into the determinant and the removal of vector inputs. The above are a binary and a ternary operation; experimenting with this concept in the same notation, we can generate a unary operation and a 0-ary operation, i.e. just an object. We have \[\hat{S}\textbf{r}_1 = \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \\ x_1 & y_1 & z_1 \end{vmatrix} = \begin{bmatrix} 0 & z_1 & -y_1 \\ -z_1 & 0 & x_1 \\ y_1 & -x_1 & 0 \end{bmatrix} \\ \\ \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \end{vmatrix}= \epsilon_{ijk}\]

The first of these takes a vector and generates a skew-symmetric matrix. The second defines the Levi-Civita symbol, a rank \(3\) tensor which is essential to vector calculus and its higher-dimensional generalisations. A use of the first operation would be to define the electromagnetic tensor, which looks like \[F_{\mu\nu}=\begin{bmatrix} 0 & E_x/c & E_y/c & E_z/c \\ -E_x/c & 0 & B_z & -B_y \\ -E_y/c & -B_z & 0 & B_x \\ -E_z/c & B_y & -B_x & 0 \end{bmatrix}\]
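A minimal sketch of both objects, assuming the sign convention used above (the function name `S` and the test vector are purely illustrative); it checks that \(\hat{S}\textbf{r}\) is skew-symmetric and that the Levi-Civita array is antisymmetric in its indices.

```python
import numpy as np

def S(r):
    """Skew-symmetric matrix from the unary determinant operation above."""
    x, y, z = r
    return np.array([[0.0,   z,  -y],
                     [ -z, 0.0,   x],
                     [  y,  -x, 0.0]])

# Levi-Civita symbol as a rank 3 array: +1 on even permutations of (0, 1, 2),
# -1 on odd permutations, 0 whenever an index repeats
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

r = np.array([1.0, 2.0, 3.0])                      # arbitrary test vector
assert np.allclose(S(r), -S(r).T)                  # skew symmetry
assert np.allclose(eps, -np.swapaxes(eps, 0, 1))   # antisymmetry in i, j
```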

Here we can see that the lower-right 3×3 block is just \(\hat{S}\textbf{B}\)...

We could adopt the convention that a vector is a column vector until transposed, and then rewrite \(F\) as \[F_{\mu\nu}=\frac{1}{c}\begin{bmatrix} 0 & \textbf{E}^T \\ -\textbf{E} & c\hat{S}\textbf{B} \end{bmatrix}\]

and we have a reduced form.
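As an illustration, the block form can be assembled directly and compared with the componentwise matrix; a minimal sketch where the values of \(\textbf{E}\), \(\textbf{B}\) and \(c\) are arbitrary placeholders.

```python
import numpy as np

def S(r):
    x, y, z = r
    return np.array([[0.0,   z,  -y],
                     [ -z, 0.0,   x],
                     [  y,  -x, 0.0]])

c = 2.0                          # arbitrary placeholder values
E = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# Reduced block form (1/c) [[0, E^T], [-E, c S(B)]]
F_block = (1.0 / c) * np.block([
    [np.zeros((1, 1)),  E[np.newaxis, :]],
    [-E[:, np.newaxis], c * S(B)],
])

# Componentwise form written out as in the text
F_full = np.array([
    [0.0,      E[0]/c,  E[1]/c,  E[2]/c],
    [-E[0]/c,  0.0,     B[2],   -B[1]],
    [-E[1]/c, -B[2],    0.0,     B[0]],
    [-E[2]/c,  B[1],   -B[0],    0.0],
])
assert np.allclose(F_block, F_full)
```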

Let us not forget at this stage the 2×2 determinants; these may constitute a basis of sorts in the 3×3 space. If we have the analogy \[\epsilon_{ij}=\begin{vmatrix} \textbf{i} & \textbf{j} \\ \textbf{i} & \textbf{j} \end{vmatrix}=\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\]

we can see this makes up the symmetry of the \(\textbf{E}\) entries in the \(F\) tensor above. However, by a Laplace expansion we have a dimensional change from 3 to 2 which can be expressed by \[\epsilon_{ijk}=\begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \end{vmatrix} = \textbf{i}\begin{vmatrix} \textbf{j} & \textbf{k} \\ \textbf{j} & \textbf{k} \end{vmatrix}-\textbf{j}\begin{vmatrix} \textbf{i} & \textbf{k} \\ \textbf{i} & \textbf{k} \end{vmatrix}+\textbf{k}\begin{vmatrix} \textbf{i} & \textbf{j} \\ \textbf{i} & \textbf{j} \end{vmatrix}\]

But this means that the rank 3 tensor with the following depth slices (the \(k^{th}\) slice being the matrix in \(ij\), where \(k\) is the depth and \(i\) and \(j\) are still row and column respectively; we could say the \(k^{th}\) shelf, by analogy) \[\begin{bmatrix}0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0\end{bmatrix}_1 \begin{bmatrix}0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0\end{bmatrix}_2 \begin{bmatrix}0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0\end{bmatrix}_3\]

is equal to the above expansion. This allows us to literally spread out the 2×2 matrices according to their entries. Although they all have the form \[\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\]

They undergo individual row and column insertion operations such that \[\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \to\begin{bmatrix}0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0\end{bmatrix}_1 \\ -\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \to\begin{bmatrix}0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0\end{bmatrix}_2 \\ \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \to\begin{bmatrix}0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0\end{bmatrix}_3\]

Then in some sense the formula \[\epsilon_{ijk}= \textbf{i}\epsilon_{jk} -\textbf{j}\epsilon_{ik} + \textbf{k}\epsilon_{ij}\]

holds. A vector of matrices.
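The slice-by-slice picture can be checked mechanically: build \(\epsilon_{ijk}\) from its permutation definition, build each shelf by inserting a zero row and column into \(\pm\epsilon_{ij}\), and compare. A minimal sketch (the helper `insert_zero_row_col` is just for this illustration):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

eps2 = np.array([[0.0, 1.0],
                 [-1.0, 0.0]])          # the 2x2 epsilon

def insert_zero_row_col(m, pos):
    """Insert a zero row and a zero column at index pos."""
    out = np.insert(m, pos, 0.0, axis=0)
    return np.insert(out, pos, 0.0, axis=1)

# Shelves built from (+eps2, -eps2, +eps2) with the zero row/column
# inserted at positions 1, 2, 3 (0-indexed: 0, 1, 2) respectively
shelves = [insert_zero_row_col(sign * eps2, pos)
           for sign, pos in [(+1, 0), (-1, 1), (+1, 2)]]
for k in range(3):
    assert np.allclose(eps[:, :, k], shelves[k])
```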

For our \(S\) operation we can swap a row in the initial matrix; however, this just equates to minus the result. \[\hat{S}\textbf{r}_1 = \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \\ x_1 & y_1 & z_1 \end{vmatrix} = \begin{bmatrix} 0 & z_1 & -y_1 \\ -z_1 & 0 & x_1 \\ y_1 & -x_1 & 0 \end{bmatrix} \\ \\ \hat{S}'\textbf{r}_1 = \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ x_1 & y_1 & z_1 \\ \textbf{i} & \textbf{j} & \textbf{k} \end{vmatrix} = \begin{bmatrix} 0 & -z_1 & y_1 \\ z_1 & 0 & -x_1 \\ -y_1 & x_1 & 0 \end{bmatrix} \\ \\\]

Without this permutation, which could be expressed as a permutation matrix \(P\) acting on the rows of the generating matrix, we can ask: what is the value of \(\hat{S}(\textbf{r}_1)\hat{S}(\textbf{r}_2)\)? \[\hat{S}(\textbf{r}_1)\hat{S}(\textbf{r}_2)= \begin{bmatrix} 0 & z_1 & -y_1 \\ -z_1 & 0 & x_1 \\ y_1 & -x_1 & 0 \end{bmatrix} \begin{bmatrix} 0 & z_2 & -y_2 \\ -z_2 & 0 & x_2 \\ y_2 & -x_2 & 0 \end{bmatrix} = \begin{bmatrix} -z_1z_2-y_1y_2 & y_1x_2 & z_1x_2 \\ x_1y_2 & -z_1z_2-x_1x_2 & z_1y_2 \\ x_1z_2 & y_1z_2 & -y_1y_2-x_1x_2 \end{bmatrix}\]

And onto a vector \[\hat{S}(\textbf{r}_1)\textbf{r}_2= \begin{bmatrix} 0 & z_1 & -y_1 \\ -z_1 & 0 & x_1 \\ y_1 & -x_1 & 0 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix} = \begin{bmatrix} z_1y_2-y_1z_2 \\ x_1z_2-z_1x_2 \\ y_1x_2-x_1y_2 \end{bmatrix} =-\textbf{r}_1 \times \textbf{r}_2\]
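Both results are easy to verify numerically. As a cross-check, the product \(\hat{S}(\textbf{r}_1)\hat{S}(\textbf{r}_2)\) also agrees with the outer-product form \(\textbf{r}_2\textbf{r}_1^T - (\textbf{r}_1\cdot\textbf{r}_2)I\), the standard identity for products of cross-product matrices. A minimal sketch with arbitrary test vectors:

```python
import numpy as np

def S(r):
    x, y, z = r
    return np.array([[0.0,   z,  -y],
                     [ -z, 0.0,   x],
                     [  y,  -x, 0.0]])

r1 = np.array([1.0, 2.0, 3.0])   # arbitrary test vectors
r2 = np.array([4.0, 5.0, 6.0])

# Action on a vector: S(r1) r2 = -(r1 x r2)
assert np.allclose(S(r1) @ r2, -np.cross(r1, r2))

# Product of two S matrices, compared with r2 r1^T - (r1 . r2) I
prod = S(r1) @ S(r2)
assert np.allclose(prod, np.outer(r2, r1) - np.dot(r1, r2) * np.eye(3))
```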

Cross Products

We have the usual formula for a cross product \[\textbf{a} \times \textbf{b} = \epsilon_{ijk}a_ib_j\hat{\textbf{e}}_k\]

with \(\hat{\textbf{e}}_k\) as the unit vectors. However, from our determinant representations we could try writing \[\begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix} = \sum_{i,j,k} \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \end{vmatrix} _{ijk} a_ib_j\hat{\textbf{e}}_k\]
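This contraction can be tested directly with an explicit Levi-Civita array; a minimal sketch using `np.einsum` and arbitrary test vectors:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

a = np.array([1.0, 2.0, 3.0])   # arbitrary test vectors
b = np.array([4.0, 5.0, 6.0])

# (a x b)_k = eps_{ijk} a_i b_j
cross = np.einsum('ijk,i,j->k', eps, a, b)
assert np.allclose(cross, np.cross(a, b))
```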

We also have the curl \[\nabla \times \mathbf{F} = \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ \partial_x & \partial_y & \partial_z \\ F_x & F_y & F_z \end{vmatrix} =\epsilon^{klm}\partial_lF_m\hat{\mathbf{e}}_k\]

One could seek to create another operator (one that maps \(f(x,y,z)\) to a vector) of the form \[\hat{O}= \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ \partial_x & \partial_y & \partial_z \\ \partial_x & \partial_y & \partial_z \end{vmatrix} =\textbf{i}(\partial_y\partial_z-\partial_z\partial_y)-...\]

But the commutativity of mixed partial derivatives (for sufficiently smooth functions) forbids this mapping from producing any vector other than \((0,0,0)\)...

Then there is the skew-symmetric matrix of the form \[\hat{O}= \begin{vmatrix} \textbf{i} & \textbf{j} & \textbf{k} \\ \textbf{i} & \textbf{j} & \textbf{k} \\ \partial_x & \partial_y & \partial_z \end{vmatrix} = \begin{bmatrix} 0 & \partial_z & -\partial_y \\ -\partial_z & 0 & \partial_x \\ \partial_y & -\partial_x & 0 \end{bmatrix} \\ \begin{bmatrix} 0 & \partial_z & -\partial_y \\ -\partial_z & 0 & \partial_x \\ \partial_y & -\partial_x & 0 \end{bmatrix} \begin{bmatrix} F_x \\ F_y \\ F_z \end{bmatrix} = \begin{bmatrix} \partial_zF_y-\partial_yF_z \\ \partial_xF_z-\partial_zF_x \\ \partial_yF_x-\partial_xF_y \end{bmatrix} =-\nabla\times\mathbf{F}\]
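A symbolic check that this operator matrix really returns \(-\nabla\times\mathbf{F}\); a minimal sketch using SymPy with an arbitrary smooth test field (the helper `curl` is just for this illustration):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x**2 * y, sp.sin(z), x * y * z])   # arbitrary smooth test field

def curl(G):
    return sp.Matrix([
        sp.diff(G[2], y) - sp.diff(G[1], z),
        sp.diff(G[0], z) - sp.diff(G[2], x),
        sp.diff(G[1], x) - sp.diff(G[0], y),
    ])

# Apply the skew-symmetric operator matrix row by row:
# the first row gives dz F_y - dy F_z, and so on
applied = sp.Matrix([
    sp.diff(F[1], z) - sp.diff(F[2], y),
    sp.diff(F[2], x) - sp.diff(F[0], z),
    sp.diff(F[0], y) - sp.diff(F[1], x),
])
assert applied + curl(F) == sp.zeros(3, 1)   # i.e. applied = -curl(F)
```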

This would make the curl of curl operator \[\nabla \times (\nabla \times)= \begin{bmatrix} -\partial_z^2-\partial_y^2 & \partial_y\partial_x & \partial_z\partial_x \\ \partial_x\partial_y & -\partial_z^2-\partial_x^2 & \partial_z\partial_y \\ \partial_x\partial_z & \partial_y\partial_z & -\partial_y^2-\partial_x^2 \end{bmatrix}\]

However, this should be equal to \(\nabla(\nabla\cdot \mathbf{A}) - \nabla^2\mathbf{A}\) via the standard curl-of-curl identity. We can tell that \[\nabla^2= \begin{bmatrix} \partial_x^2 + \partial_y^2 + \partial_z^2 & 0 & 0 \\ 0 & \partial_x^2 + \partial_y^2 + \partial_z^2 & 0 \\ 0 & 0 & \partial_x^2 + \partial_y^2 + \partial_z^2 \end{bmatrix}\]

Which would make \[\nabla(\nabla \cdot)= \begin{bmatrix} \partial_x^2 & \partial_y\partial_x & \partial_z\partial_x \\ \partial_x\partial_y & \partial_y^2 & \partial_z\partial_y \\ \partial_x\partial_z & \partial_y\partial_z & \partial_z^2 \end{bmatrix}\]
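The two operator matrices should then agree when applied to any smooth field. A minimal SymPy sketch checking \(\nabla\times(\nabla\times\mathbf{A}) = \nabla(\nabla\cdot\mathbf{A}) - \nabla^2\mathbf{A}\) componentwise on an arbitrary test field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([x**2 * y, y * sp.sin(z), x * z**2])   # arbitrary test field

def curl(G):
    return sp.Matrix([
        sp.diff(G[2], y) - sp.diff(G[1], z),
        sp.diff(G[0], z) - sp.diff(G[2], x),
        sp.diff(G[1], x) - sp.diff(G[0], y),
    ])

div_A = sp.diff(A[0], x) + sp.diff(A[1], y) + sp.diff(A[2], z)
grad_div = sp.Matrix([sp.diff(div_A, v) for v in (x, y, z)])
laplacian = sp.Matrix([sum(sp.diff(A[i], v, 2) for v in (x, y, z))
                       for i in range(3)])

# curl(curl A) = grad(div A) - laplacian(A), componentwise
assert curl(curl(A)) - (grad_div - laplacian) == sp.zeros(3, 1)
```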

As the divergence operator maps a vector field to a scalar field, it must be a row vector. As the divergence of any curl is zero, it must give zero when applied from the left to the curl matrix. This gives a likely form \[\begin{bmatrix} \partial_x & \partial_y & \partial_z \end{bmatrix} \begin{bmatrix} 0 & -\partial_z & \partial_y \\ \partial_z & 0 & -\partial_x \\ -\partial_y & \partial_x & 0 \end{bmatrix} =\begin{bmatrix}0&0&0\end{bmatrix}\]

Knowing this, we require the gradient operator to be a column vector, such that when it multiplies the divergence row operator from the left we recover the matrix form above. This gives the form \[\begin{bmatrix} \partial_x \\ \partial_y \\ \partial_z \end{bmatrix} \begin{bmatrix} \partial_x & \partial_y & \partial_z \end{bmatrix} = \begin{bmatrix} \partial_x^2 & \partial_y\partial_x & \partial_z\partial_x \\ \partial_x\partial_y & \partial_y^2 & \partial_z\partial_y \\ \partial_x\partial_z & \partial_y\partial_z & \partial_z^2 \end{bmatrix}\]
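Both requirements can again be checked symbolically: the divergence row annihilates the curl of any field, and the column-times-row product \(\nabla\nabla^T\), applied to a field, gives the gradient of its divergence. A minimal sketch under the same assumptions (arbitrary smooth test field, helper names purely illustrative):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
F = sp.Matrix([x * y**2, sp.cos(x) * z, y * z**2])   # arbitrary test field

def curl(G):
    return sp.Matrix([
        sp.diff(G[2], y) - sp.diff(G[1], z),
        sp.diff(G[0], z) - sp.diff(G[2], x),
        sp.diff(G[1], x) - sp.diff(G[0], y),
    ])

def div(G):
    return sum(sp.diff(G[i], coords[i]) for i in range(3))

# Divergence (row) applied after curl: identically zero
assert sp.simplify(div(curl(F))) == 0

# The matrix [d_i d_j] from grad (column) times div (row), applied to F,
# gives grad(div F) componentwise
graddiv_applied = sp.Matrix([sum(sp.diff(F[j], coords[j], coords[i])
                                 for j in range(3)) for i in range(3)])
assert graddiv_applied == sp.Matrix([sp.diff(div(F), v) for v in coords])
```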