Under matrix multiplication, matrices of different shapes (diagonal, upper/lower triangular, anti-diagonal variants, etc.) produce various other shapes. For example, in LU decomposition a lower triangular and an upper triangular matrix multiply to give a general square matrix. Here I search for a group structure and explore this concept a little.

We can draw a Cayley table under matrix multiplication; however, since this is a non-commutative operation, we do not expect the table to be symmetric.

Let us define \(s\) as a square positive matrix (all elements greater than zero), \(d\) and \(a\) as diagonal and anti-diagonal positive matrices respectively, and \(nw, ne, sw\) and \(se\) (compass directions) as triangular positive matrices whose nonzero entries fill the triangle around the respective corner, so that \(sw\) is the usual lower triangular shape and \(ne\) the usual upper triangular shape.

For explicitness, in the \(3\times 3\) case (with \(+\) marking a strictly positive entry):

\[d = \begin{bmatrix} + & 0 & 0 \\ 0 & + & 0 \\ 0 & 0 & + \end{bmatrix}, \quad a = \begin{bmatrix} 0 & 0 & + \\ 0 & + & 0 \\ + & 0 & 0 \end{bmatrix}, \quad sw = \begin{bmatrix} + & 0 & 0 \\ + & + & 0 \\ + & + & + \end{bmatrix}, \quad nw = \begin{bmatrix} + & + & + \\ + & + & 0 \\ + & 0 & 0 \end{bmatrix}\]

\[\begin{array}{ c| c c c c c c c} & s & sw & ne & se & nw & d & a \\ \hline s & s & s & s & s & s & s & s \\ sw & s & sw & s & se & s & sw & se \\ ne & s & s & ne & s & nw & ne & nw \\ se & s & s & se & s & sw & se & sw \\ nw & s & nw & s & ne & s & nw & ne \\ d & s & sw & ne & se & nw & d & a \\ a & s & nw & se & ne & sw & a & d \\ \end{array}\]

It is clear from this table that the element \(s\) acts like \(0\): multiplied by anything, it produces itself (it is absorbing). The element \(d\) acts like \(1\): multiplied by anything, it leaves it unchanged (an identity). In some sense \(a\) is then like \(-1\), as it switches some property of a given element: a left multiplication by \(a\) changes \(sX\) to \(nX\) and \(nX\) to \(sX\), where \(X\) is \(e\) or \(w\); a right multiplication by \(a\) changes \(Xw\) to \(Xe\) and \(Xe\) to \(Xw\), where \(X\) is \(n\) or \(s\).

With this in mind, a left and a right multiplication of some element by \(a\) will change both parts; for example, \((a)(se)(a)=(nw)\).
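These table entries can be spot-checked numerically. Below is a minimal sketch (not from the original text): the masks mirror the shape definitions above, and the helper names `sample`, `matmul` and `classify` are illustrative inventions.

```python
import random

N = 3  # matrix size for the check

# Boolean masks for each named shape: True where entries may be nonzero.
MASKS = {
    "s":  [[True] * N for _ in range(N)],                          # full positive
    "d":  [[i == j for j in range(N)] for i in range(N)],          # diagonal
    "a":  [[i + j == N - 1 for j in range(N)] for i in range(N)],  # anti-diagonal
    "sw": [[i >= j for j in range(N)] for i in range(N)],          # lower triangular
    "ne": [[i <= j for j in range(N)] for i in range(N)],          # upper triangular
    "nw": [[i + j <= N - 1 for j in range(N)] for i in range(N)],  # upper-left triangle
    "se": [[i + j >= N - 1 for j in range(N)] for i in range(N)],  # lower-right triangle
}

def sample(shape):
    """Random matrix that is positive exactly on the shape's mask."""
    return [[random.uniform(1, 2) if MASKS[shape][i][j] else 0.0
             for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def classify(M):
    """Name of the shape whose mask equals the nonzero pattern of M."""
    for name in ("d", "a", "sw", "ne", "nw", "se", "s"):
        if all((M[i][j] != 0) == MASKS[name][i][j]
               for i in range(N) for j in range(N)):
            return name
    return "?"

# Spot-check two Cayley-table facts: sw * ne = s, and (a)(se)(a) = nw.
assert classify(matmul(sample("sw"), sample("ne"))) == "s"
assert classify(matmul(sample("a"), matmul(sample("se"), sample("a")))) == "nw"
```

Since all entries are nonnegative, no cancellation can occur, so the nonzero pattern of a product is determined by the shapes alone and the random sampling is safe.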

There cannot be an inverse for every element when we insist that the matrices are positive (for instance, \(s\) times anything is again \(s\), so nothing can ever send \(s\) to the identity \(d\)), thus they cannot form a group.

Exploiting symmetries, we can rearrange the entries into a diamond: \[\begin{bmatrix} a & b \\ c & d \end{bmatrix} \to \Bigg[ \begin{matrix} b & \\ a & d \\ c & \end{matrix} \Bigg]\]

\[\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{bmatrix} \to \Bigg[ \begin{matrix} af+bh & \\ ae+bg & cf+dh \\ ce+dg & \end{matrix} \Bigg]\]

\[\Bigg[ \begin{matrix} b & \\ a & d \\ c & \end{matrix} \Bigg]^T = \Bigg[\begin{matrix} c & \\ a & d \\ b & \end{matrix} \Bigg]\]

With complex numbers, a pair \((a+ib),(c+id)\) could be encoded as the rows of a matrix: \[(a+ib),(c+id) \to \begin{bmatrix} a & b \\ c & d \end{bmatrix}\]

Thus with an additional pair \((e+if),(g+ih)\) we have \[\begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae-bf & af+be \\ cg-dh & ch+dg \end{bmatrix}\]

This represents a term-for-term product of two complex vectors, yet it is written with the same ingredients as two-by-two matrix multiplication! One could ask: do there exist pairs of matrices which give the same written equation whether the dot is interpreted as matrix multiplication or as this complex vector product? A trivial solution is all elements zero, so there is at least one.
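This question can be probed by brute force. The sketch below (an illustration, not from the original) compares the two interpretations of the dot over all \(2\times 2\) matrices with entries in \(\{-1,0,1\}\); the helper names are hypothetical.

```python
from itertools import product

def mat_product(A, B):
    """Ordinary 2x2 matrix multiplication on tuples of rows."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def complex_product(A, B):
    """Row-wise product of complex pairs: row (x, y) encodes x + iy."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    top = complex(a, b) * complex(e, f)
    bot = complex(c, d) * complex(g, h)
    return ((top.real, top.imag), (bot.real, bot.imag))

# Collect every pair on which the two products coincide.
solutions = []
vals = (-1, 0, 1)
for a, b, c, d, e, f, g, h in product(vals, repeat=8):
    A, B = ((a, b), (c, d)), ((e, f), (g, h))
    if mat_product(A, B) == complex_product(A, B):
        solutions.append((A, B))

print(len(solutions))  # number of coinciding pairs found in this small grid
assert (((0, 0), (0, 0)), ((0, 0), (0, 0))) in solutions  # the trivial solution
```

Consistent with the derivation below, every pair with an upper triangular left factor and a zero right factor appears among the solutions.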

A non-trivial solution would satisfy the set of equations \[ae-bf = ae+bg \\ af+be = af+bh \\ cg-dh = ce+dg \\ ch+dg = cf+dh\]

which (taking \(b \neq 0\)) reduce to \[-f = g \\ e = h \\ cg-dh = ce+dg \\ ch+dg = cf+dh\]

However, if \(c=0\) (the left-hand matrix is upper triangular) the last two equations give \(-h = g\) and \(g = h\) (for \(d \neq 0\)), which is only satisfied by \(g=h=0\). Therefore, from the top two equations, also \(f=e=0\); then \(a\), \(b\) and \(d\) can take any value, as the remaining equations are independent of them.

The operation here is uncoupled, so we could make a strange change such as defining a new product: \[\begin{bmatrix} a & b \\ c & d \end{bmatrix} \# \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae-dh & af+dg \\ cg-bf & ch+be \end{bmatrix}\]

Term by term, deliberately indexing entries by 0 and 1: \[{\begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix}}\#{\begin{bmatrix} b_{00} & b_{01} \\ b_{10} & b_{11} \end{bmatrix}} = {\begin{bmatrix} a_{00}b_{00}-a_{11}b_{11} & a_{00}b_{01}+a_{11}b_{10} \\ a_{10}b_{10}-a_{01}b_{01} & a_{10}b_{11}+a_{01}b_{00} \end{bmatrix}}\]

After a fair amount of thinking, this has the general formula \[a\#b=M_{ij}=\sum_{k=0}^1 (-1)^{\delta^k_1\delta^j_0}a_{i \oplus k,k}b_{i \oplus k,j \oplus k}\] where \(\oplus\) is the logical XOR operation and \(\delta\) is the Kronecker delta.
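The general formula can be checked against the term-by-term definition. A minimal sketch (the function names are illustrative, not from the original):

```python
import random

def sharp_explicit(A, B):
    """The '#' product written out term by term, on 2x2 tuples of rows."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e - d*h, a*f + d*g),
            (c*g - b*f, c*h + b*e))

def sharp_formula(A, B):
    """General formula: M_ij = sum_k (-1)^(d(k,1)d(j,0)) A[i^k][k] B[i^k][j^k],
    where ^ is XOR and d is the Kronecker delta."""
    return tuple(
        tuple(
            sum((-1) ** ((k == 1) and (j == 0)) * A[i ^ k][k] * B[i ^ k][j ^ k]
                for k in (0, 1))
            for j in (0, 1))
        for i in (0, 1))

# The two definitions agree on random integer matrices.
for _ in range(100):
    A = tuple(tuple(random.randint(-5, 5) for _ in range(2)) for _ in range(2))
    B = tuple(tuple(random.randint(-5, 5) for _ in range(2)) for _ in range(2))
    assert sharp_explicit(A, B) == sharp_formula(A, B)
```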

We have the following results \[\begin{bmatrix}1&0\\0&0\end{bmatrix}\#{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}=\begin{bmatrix}1&1\\0&0\end{bmatrix}\\ \begin{bmatrix}0&1\\0&0\end{bmatrix}\#{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}=\begin{bmatrix}0&0\\-1&1\end{bmatrix}\\ \begin{bmatrix}0&0\\1&0\end{bmatrix}\#{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}=\begin{bmatrix}0&0\\1&1\end{bmatrix}\\ \begin{bmatrix}0&0\\0&1\end{bmatrix}\#{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}=\begin{bmatrix}-1&1\\0&0\end{bmatrix}\\\] The relationship does not commute \[{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\#\begin{bmatrix}1&0\\0&0\end{bmatrix}=\begin{bmatrix}1&0\\0&1\end{bmatrix}=I\\ {\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\#\begin{bmatrix}0&1\\0&0\end{bmatrix}=\begin{bmatrix}0&1\\-1&0\end{bmatrix}=i\sigma_y\\ {\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\#\begin{bmatrix}0&0\\1&0\end{bmatrix}=\begin{bmatrix}0&1\\1&0\end{bmatrix}=\sigma_x\\ {\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\#\begin{bmatrix}0&0\\0&1\end{bmatrix}=\begin{bmatrix}-1&0\\0&1\end{bmatrix}=-\sigma_z\\\]

These form an interesting basis.
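The eight products above are easy to verify mechanically. A short sketch (with `sharp` implementing the \(\#\) product defined earlier; the other names are illustrative):

```python
def sharp(A, B):
    """The '#' product of 2x2 matrices given as tuples of rows."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e - d*h, a*f + d*g), (c*g - b*f, c*h + b*e))

J = ((1, 1), (1, 1))  # the all-ones matrix
E = {  # elementary matrices with a single 1 at position (i, j)
    (0, 0): ((1, 0), (0, 0)),
    (0, 1): ((0, 1), (0, 0)),
    (1, 0): ((0, 0), (1, 0)),
    (1, 1): ((0, 0), (0, 1)),
}

I         = ((1, 0), (0, 1))
i_sig_y   = ((0, 1), (-1, 0))   # i * sigma_y
sig_x     = ((0, 1), (1, 0))
neg_sig_z = ((-1, 0), (0, 1))   # -sigma_z

# Right-multiplying the elementary matrices into J recovers the basis above.
assert sharp(J, E[(0, 0)]) == I
assert sharp(J, E[(0, 1)]) == i_sig_y
assert sharp(J, E[(1, 0)]) == sig_x
assert sharp(J, E[(1, 1)]) == neg_sig_z
```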

The transform appears to follow the relationship \[\mathbf{1}\#\begin{bmatrix}a&b\\c&d\end{bmatrix}=\begin{bmatrix}a-d & b+c \\ c-b & a+d\end{bmatrix} = a I + b\,(i\sigma_y) + c\,\sigma_x - d\,\sigma_z\] where \(\mathbf{1}\) is the all-ones matrix.
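Since the \(\#\) product is bilinear, the four basis results suggest that \(\mathbf{1}\#M\) expands as \(aI + b(i\sigma_y) + c\sigma_x - d\sigma_z\). A quick numeric check of this (a sketch; `sharp` implements the \(\#\) product defined earlier):

```python
import random

def sharp(A, B):
    """The '#' product of 2x2 matrices given as tuples of rows."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e - d*h, a*f + d*g), (c*g - b*f, c*h + b*e))

J = ((1, 1), (1, 1))  # the all-ones matrix

for _ in range(100):
    a, b, c, d = (random.randint(-9, 9) for _ in range(4))
    M = ((a, b), (c, d))
    # a*I + b*(i sigma_y) + c*sigma_x - d*sigma_z, written out entrywise:
    expected = ((a - d, b + c), (c - b, a + d))
    assert sharp(J, M) == expected
```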
