Complex

With a pair of complex numbers \((a+ib),(c+id)\) we can form the matrix \[(a+ib),(c+id) \to \begin{bmatrix} a & b \\ c & d \end{bmatrix}\]

Thus with an additional pair \((e+if),(g+ih)\) we have \[\begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae-bf & af+be \\ cg-dh & ch+dg \end{bmatrix}\]

This represents a term-by-term product of two complex vectors, yet it is written in the same shape as a two-by-two matrix multiplication! One could ask: do there exist pairs of matrices which give the same written equation whether the dot is interpreted as matrix multiplication or as the complex vector system? A trivial solution is all elements being zero, so there is at least one.

A non-trivial solution would have to satisfy the set of equations \[ae-bf = ae+bg \\ af+be = af+bh \\ cg-dh = ce+dg \\ ch+dg = cf+dh\]

Which, for \(b \ne 0\), reduce to \[-f = g \\ e = h \\ cg-dh = ce+dg \\ ch+dg = cf+dh\]

However if \(c=0\) (the left-hand matrix is upper triangular) and \(d \ne 0\), the last two equations give \(-h = g\) and \(g = h\), which is only satisfied by \(g=h=0\)... Then from the top two equations also \(f=e=0\), while \(a\), \(b\) and \(d\) can take any value, as the equations are independent of these.

The operation here is uncoupled, so we could make a strange change such as defining a new product \[\begin{bmatrix} a & b \\ c & d \end{bmatrix} \# \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae-dh & af+dg \\ cg-bf & ch+be \end{bmatrix}\]

Term by term, deliberately indexing the entries by 0 and 1, \[{\begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix}}\#{\begin{bmatrix} b_{00} & b_{01} \\ b_{10} & b_{11} \end{bmatrix}} = {\begin{bmatrix} a_{00}b_{00}-a_{11}b_{11} & a_{00}b_{01}+a_{11}b_{10} \\ a_{10}b_{10}-a_{01}b_{01} & a_{10}b_{11}+a_{01}b_{00} \end{bmatrix}}\]

After a fair amount of thinking, this has the general formula \[(a\#b)_{ij}=\sum_{k=0}^1 (-1)^{\delta^k_1\delta^j_0}\,a_{i \oplus k,k}\,b_{i \oplus k,j \oplus k}\] where \(\oplus\) is the logical XOR operation.
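
As a sanity check on the index formula, here is a minimal numerical sketch (Python with numpy is assumed; `hash_product` is just an illustrative name), comparing it against the explicit term-by-term expansion above:

```python
import numpy as np

def hash_product(a, b):
    """The '#' product: M[i,j] = sum_k (-1)^(d(k,1)*d(j,0)) * a[i^k, k] * b[i^k, j^k]."""
    m = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            for k in range(2):
                sign = -1 if (k == 1 and j == 0) else 1
                m[i, j] += sign * a[i ^ k, k] * b[i ^ k, j ^ k]
    return m

rng = np.random.default_rng(0)
a, b = rng.integers(-5, 5, (2, 2)), rng.integers(-5, 5, (2, 2))

# Explicit expansion written out term by term, as in the display above.
explicit = np.array([
    [a[0, 0] * b[0, 0] - a[1, 1] * b[1, 1], a[0, 0] * b[0, 1] + a[1, 1] * b[1, 0]],
    [a[1, 0] * b[1, 0] - a[0, 1] * b[0, 1], a[1, 0] * b[1, 1] + a[0, 1] * b[0, 0]],
])
assert np.array_equal(hash_product(a, b), explicit)
```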

We have the following results \[\begin{bmatrix}1&0\\0&0\end{bmatrix}\#{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}=\begin{bmatrix}1&1\\0&0\end{bmatrix}\\ \begin{bmatrix}0&1\\0&0\end{bmatrix}\#{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}=\begin{bmatrix}0&0\\-1&1\end{bmatrix}\\ \begin{bmatrix}0&0\\1&0\end{bmatrix}\#{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}=\begin{bmatrix}0&0\\1&1\end{bmatrix}\\ \begin{bmatrix}0&0\\0&1\end{bmatrix}\#{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}=\begin{bmatrix}-1&1\\0&0\end{bmatrix}\\\] The relationship does not commute \[{\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\#\begin{bmatrix}1&0\\0&0\end{bmatrix}=\begin{bmatrix}1&0\\0&1\end{bmatrix}=I\\ {\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\#\begin{bmatrix}0&1\\0&0\end{bmatrix}=\begin{bmatrix}0&1\\-1&0\end{bmatrix}=i\sigma_y\\ {\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\#\begin{bmatrix}0&0\\1&0\end{bmatrix}=\begin{bmatrix}0&1\\1&0\end{bmatrix}=\sigma_x\\ {\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\#\begin{bmatrix}0&0\\0&1\end{bmatrix}=\begin{bmatrix}-1&0\\0&1\end{bmatrix}=-\sigma_z\\\]

Which are an interesting basis.
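
The tabulated products are easy to reproduce; a small sketch under the same assumptions, with the \(\#\) product written out term by term so the snippet runs on its own:

```python
import numpy as np

def hash_product(a, b):
    # '#' product for 2x2 matrices, written out term by term.
    return np.array([
        [a[0, 0] * b[0, 0] - a[1, 1] * b[1, 1], a[0, 0] * b[0, 1] + a[1, 1] * b[1, 0]],
        [a[1, 0] * b[1, 0] - a[0, 1] * b[0, 1], a[1, 0] * b[1, 1] + a[0, 1] * b[0, 0]],
    ])

ones = np.ones((2, 2), dtype=int)
for i in range(2):
    for j in range(2):
        e = np.zeros((2, 2), dtype=int)
        e[i, j] = 1
        # Left and right products with the all-ones matrix, as tabulated above.
        print(e, "\n# ones =\n", hash_product(e, ones))
        print("ones #\n", e, "=\n", hash_product(ones, e))
```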

The transform appears to follow the relationship (writing \(1\) for the all-ones matrix) \[1\#\begin{bmatrix}a&b\\c&d\end{bmatrix}=1\#\begin{bmatrix}a&0\\0&0\end{bmatrix}+1\#\begin{bmatrix}0&b\\0&0\end{bmatrix}+1\#\begin{bmatrix}0&0\\c&0\end{bmatrix}+1\#\begin{bmatrix}0&0\\0&d\end{bmatrix}\]

also

\[\begin{bmatrix}1&1\\1&1\end{bmatrix}\#\begin{bmatrix}a&0\\0&0\end{bmatrix}=\begin{bmatrix}a&a\\a&a\end{bmatrix}\#\begin{bmatrix}1&0\\0&0\end{bmatrix}=a(\begin{bmatrix}1&1\\1&1\end{bmatrix}\#\begin{bmatrix}1&0\\0&0\end{bmatrix})\]

The operation has a left identity \[\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}\#\begin{bmatrix}a&b\\c&d\end{bmatrix}=\begin{bmatrix}a&b\\c&d\end{bmatrix}\]

However it is not possible to define a right identity! Using the same matrix on the right keeps the elements but swaps the two entries of the right-hand column, \(b\) and \(d\). Most likely a strange artifact of the way we came to the transform.

We can state however that \[\bigg( {\begin{bmatrix} a & b \\ c & d \end{bmatrix}} \# {\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}}\bigg) \# {\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}} = {\begin{bmatrix} a & b \\ c & d \end{bmatrix}}\]

This is strange! as \[{\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}} \# {\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}} = {\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}}\]

So the operation cannot be associative! Writing \(\iota\) for the left identity, we have \((a\,\#\,\iota)\,\#\,\iota = a\), yet associativity would force \((a\,\#\,\iota)\,\#\,\iota = a\,\#\,(\iota\,\#\,\iota) = a\,\#\,\iota\), which in general is not \(a\).
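
The identity and associativity claims can be checked directly; a minimal sketch under the same assumptions as before:

```python
import numpy as np

def hash_product(a, b):
    # '#' product for 2x2 matrices, written out term by term.
    return np.array([
        [a[0, 0] * b[0, 0] - a[1, 1] * b[1, 1], a[0, 0] * b[0, 1] + a[1, 1] * b[1, 0]],
        [a[1, 0] * b[1, 0] - a[0, 1] * b[0, 1], a[1, 0] * b[1, 1] + a[0, 1] * b[0, 0]],
    ])

E = np.array([[1, 0], [1, 0]])                # the left identity
A = np.array([[2, -3], [5, 7]])

assert np.array_equal(hash_product(E, A), A)  # E # A = A (left identity)
print(hash_product(A, E))                     # A # E swaps the right-hand column: [[2, 7], [5, -3]]
assert np.array_equal(hash_product(hash_product(A, E), E), A)  # (A # E) # E = A
assert np.array_equal(hash_product(E, E), E)                   # E # E = E
# Associativity would force A # (E # E) == (A # E) # E, i.e. A # E == A, which fails:
assert not np.array_equal(hash_product(A, hash_product(E, E)),
                          hash_product(hash_product(A, E), E))
```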

We can define a left inverse: for some \(A\#B=C\), a matrix \(A^{-1}\) with \(A^{-1}\#A\) equal to the left identity, so that \((A^{-1}\#A)\#B=B\) (and, one hopes, \(A^{-1}\#C=B\)). For such a system we solve for the \(a,b,c,d\) such that \[\begin{bmatrix} e & 0 & 0 &-h \\ f & 0 & 0 & g \\ 0 & -f & g & 0 \\ 0 & e & h & 0 \end{bmatrix}\begin{bmatrix}a \\ b \\ c \\ d \end{bmatrix}=\begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix}\] Which results in \[\begin{bmatrix}a \\ b \\ c \\ d \end{bmatrix}=\frac{1}{eg+fh}\begin{bmatrix}g&h&0&0\\0&0&-h&g\\0&0&e&f\\-f&e&0&0\end{bmatrix} \begin{bmatrix}1 \\ 0 \\ 1 \\ 0 \end{bmatrix}=\frac{1}{eg+fh}\begin{bmatrix}g\\-h\\e\\-f\end{bmatrix}\]

This gives the general formula for the \(\#\) inverse of a 2 by 2 matrix as \[\begin{bmatrix} a&b\\c&d\end{bmatrix}^{-1}=\frac{1}{ac+bd}\begin{bmatrix}c & -d \\ a & -b \end{bmatrix}\]

This is fondly reminiscent of the steps taken to invert a regular matrix under matrix multiplication, except the exchange is of rows rather than of the diagonal terms, and the two negatives sit in a different place. Also there will still exist \(\#\)-singular matrices, when \(ac=-bd\). This would make \(ac+bd\) the \(\#\) determinant of the matrix...

An interesting question: if a matrix is normally singular, \(ad-bc=0\), can it also be \(\#\)-singular, \(ac+bd=0\)? It can, if \(a\) and \(b\) are both \(0\), or for certain other combinations, some with complex entries...
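
Both the inverse formula and the doubly singular case can be checked numerically; a sketch (the name `hash_inverse` is just a label for the formula above):

```python
import numpy as np

def hash_product(a, b):
    # '#' product for 2x2 matrices, written out term by term.
    return np.array([
        [a[0, 0] * b[0, 0] - a[1, 1] * b[1, 1], a[0, 0] * b[0, 1] + a[1, 1] * b[1, 0]],
        [a[1, 0] * b[1, 0] - a[0, 1] * b[0, 1], a[1, 0] * b[1, 1] + a[0, 1] * b[0, 0]],
    ])

def hash_inverse(m):
    # [[a, b], [c, d]]^{-1} under '#' = 1/(ac + bd) * [[c, -d], [a, -b]]
    a, b, c, d = m[0, 0], m[0, 1], m[1, 0], m[1, 1]
    return np.array([[c, -d], [a, -b]]) / (a * c + b * d)

E = np.array([[1, 0], [1, 0]])
A = np.array([[3, 2], [1, 1]])
assert np.allclose(hash_product(hash_inverse(A), A), E)  # A^{-1} # A = left identity

# A matrix that is singular in both senses: ad - bc = 0 and ac + bd = 0.
B = np.array([[0, 0], [4, 9]])
print(np.linalg.det(B), B[0, 0] * B[1, 0] + B[0, 1] * B[1, 1])  # 0.0 and 0
```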

A desirable scenario would now be a regular equation of the form \[Ax=b \\ \Upsilon\# Ax=\Upsilon\# b\]

If we can use an \(\Upsilon\) that is the \(\#\) inverse of \(A\), then we know \[x=\Upsilon\# b\]

However it is not yet clear how to apply the \(\#\) operation to a vector. This can be explored: we must find the general expression for a matrix-to-vector \(\#\) operation. In the meantime we can set up a scenario we know the solution to, attempting to reverse engineer the result! \[\begin{bmatrix}n&0\\0&n\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}nx_1\\nx_2\end{bmatrix}\]

We know the \(\#\) inverse of the identity matrix,

\[\frac{1}{n}\begin{bmatrix}1&0\\1&0\end{bmatrix}\#\begin{bmatrix}n&0\\0&n\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\frac{1}{n}\begin{bmatrix}1&0\\1&0\end{bmatrix}\#\begin{bmatrix}nx_1\\nx_2\end{bmatrix}\]

The factors of \(n\) will cancel; this reveals that the left identity still acts as an identity on a vector. This is fairly obvious, but now known for sure. A less easy example would be

\[\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}ax_1+bx_2\\cx_1+dx_2\end{bmatrix}\]

We can use our \(\#\) inverse already found to give \[\bigg(\frac{1}{ac+bd}\begin{bmatrix}c & -d \\ a & -b \end{bmatrix}\#\begin{bmatrix}a&b\\c&d\end{bmatrix}\bigg)\begin{bmatrix}x_1\\x_2\end{bmatrix}=\frac{1}{ac+bd}\begin{bmatrix}c & -d \\ a & -b \end{bmatrix}\#\begin{bmatrix}ax_1+bx_2\\cx_1+dx_2\end{bmatrix}\\ {\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\frac{1}{ac+bd}\begin{bmatrix}c & -d \\ a & -b \end{bmatrix}\#\begin{bmatrix}ax_1+bx_2\\cx_1+dx_2\end{bmatrix} \\ \begin{bmatrix}x_1\\x_1\end{bmatrix}=\frac{1}{ac+bd}\begin{bmatrix}c & -d \\ a & -b \end{bmatrix}\#\begin{bmatrix}ax_1+bx_2\\cx_1+dx_2\end{bmatrix} \\\]

We can define the concept of a “cross inverse”: for any \(A.B=C\) we apply \(X^{-1}\#\) to both sides, \(X^{-1}\#A.B=X^{-1}\#C\), where \(X^{-1}\) is chosen such that \(X^{-1}\#A=I\). Like before, but with a minor change of column \[\begin{bmatrix} e & 0 & 0 &-h \\ f & 0 & 0 & g \\ 0 & -f & g & 0 \\ 0 & e & h & 0 \end{bmatrix}\begin{bmatrix}a \\ b \\ c \\ d \end{bmatrix}=\begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}\] Which results in \[\begin{bmatrix}a \\ b \\ c \\ d \end{bmatrix}=\frac{1}{eg+fh}\begin{bmatrix}g&h&0&0\\0&0&-h&g\\0&0&e&f\\-f&e&0&0\end{bmatrix} \begin{bmatrix}1 \\ 0 \\ 0 \\ 1 \end{bmatrix}=\frac{1}{eg+fh}\begin{bmatrix}g\\g\\f\\-f\end{bmatrix}\]

This gives the general formula for the “cross inverse” of a 2 by 2 matrix as \[{\begin{bmatrix} a & b \\ c & d \end{bmatrix}}^{-1}=\frac{1}{ac+bd}{\begin{bmatrix} c & c \\ b & -b \end{bmatrix}}\]

Which is very curious, as it relies on all of the elements only weakly, through the scalar coefficient, with just \(b\) and \(c\) appearing in the matrix itself. Even more odd is that for the identity matrix, whose cross inverse is clearly the hash identity, the above formula leads to an undefined quantity, \[\frac{1}{0+0}{\begin{bmatrix} 0 & 0 \\ 0 & -0 \end{bmatrix}} = {\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}}\]
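
The cross-inverse formula can be verified symbolically for a general matrix; a sketch assuming sympy (`cross_inverse` is again just an illustrative name):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')

def hash_product(A, B):
    # '#' product for 2x2 matrices.
    return sp.Matrix([
        [A[0, 0] * B[0, 0] - A[1, 1] * B[1, 1], A[0, 0] * B[0, 1] + A[1, 1] * B[1, 0]],
        [A[1, 0] * B[1, 0] - A[0, 1] * B[0, 1], A[1, 0] * B[1, 1] + A[0, 1] * B[0, 0]],
    ])

def cross_inverse(M):
    # [[a, b], [c, d]] -> 1/(ac + bd) * [[c, c], [b, -b]]
    aa, bb, cc, dd = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return sp.Matrix([[cc, cc], [bb, -bb]]) / (aa * cc + bb * dd)

A = sp.Matrix([[a, b], [c, d]])
print(sp.simplify(hash_product(cross_inverse(A), A)))  # Matrix([[1, 0], [0, 1]])
```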

An Example

\[{\begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}}{\begin{bmatrix} 3 \\ 2 \end{bmatrix}}={\begin{bmatrix} 13 \\ 5 \end{bmatrix}} \\ \frac{1}{5}{\begin{bmatrix} 1 & 1 \\ 2 & -2 \end{bmatrix}}\#{\begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}}{\begin{bmatrix} 3 \\ 2 \end{bmatrix}}=\frac{1}{5}{\begin{bmatrix} 1 & 1 \\ 2 & -2 \end{bmatrix}}\#{\begin{bmatrix} 13 \\ 5 \end{bmatrix}} \\ {\begin{bmatrix} 3 \\ 2 \end{bmatrix}}=\frac{1}{5}{\begin{bmatrix} 1 & 1 \\ 2 & -2 \end{bmatrix}}\#{\begin{bmatrix} 13 \\ 5 \end{bmatrix}}\]
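The matrix half of this example can be checked directly, since it only involves the matrix-to-matrix \(\#\) product; a small numerical sketch under the same assumptions as the earlier snippets:

```python
import numpy as np

def hash_product(a, b):
    # '#' product for 2x2 matrices, written out term by term.
    return np.array([
        [a[0, 0] * b[0, 0] - a[1, 1] * b[1, 1], a[0, 0] * b[0, 1] + a[1, 1] * b[1, 0]],
        [a[1, 0] * b[1, 0] - a[0, 1] * b[0, 1], a[1, 0] * b[1, 1] + a[0, 1] * b[0, 0]],
    ])

A = np.array([[3, 2], [1, 1]])
x = np.array([3, 2])
print(A @ x)                          # [13  5]

X = np.array([[1, 1], [2, -2]]) / 5   # cross inverse of A
print(hash_product(X, A))             # the ordinary identity
print(hash_product(X, A) @ x)         # [3. 2.] -- the left-hand side recovers x
```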

To figure out how this works we must solve for the operation that provides \[\frac{1}{ac+bd}{\begin{bmatrix} c & c \\ b & -b \end{bmatrix}}\#{\begin{bmatrix} ax_1+bx_2 \\ cx_1+dx_2 \end{bmatrix}}={\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}\]

However, this seems subject to certain constraints, given the equation \[{\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}}{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} x_1 \\ x_1 \end{bmatrix}} \\ {\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}}\#{\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}}{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}}\#{\begin{bmatrix} x_1 \\ x_1 \end{bmatrix}} \\ {\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}}\#{\begin{bmatrix} x_1 \\ x_1 \end{bmatrix}}\]

Which is just a statement that the last equation is the (non-existent) inverse transform of the first... unless \(x_2\) can be obtained from some kind of combination involving the \(x_1\) value...

But really, the matrix used in the first place has a normal determinant of zero. So undertaking the transform at this stage is bound to fail.

So instead we can use a matrix which has both a non-zero determinant and a non-zero hash determinant (thus cross determinant). \[{\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}}{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} x_1 \\ x_1+x_2 \end{bmatrix}} \\ {\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}}\#{\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}}{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}}\#{\begin{bmatrix} x_1 \\ x_1+x_2 \end{bmatrix}} \\ {\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}}\#{\begin{bmatrix} x_1 \\ x_1+x_2 \end{bmatrix}}\]

A legitimate transform at this stage would be \[A\#v=u_{i} \\ u_1=A_{11}v_1 \\ u_2=A_{1x}v_2-A_{1x}v_1\]

where \(x\) is either 1 or 2, since \(A_{11}=A_{12}\) here. (This is confusing; it should be rewritten with 0,1 indices to match the earlier convention.) We can find the opposing symmetry, to add information about the transform... \[{\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}}{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} x_1+x_2 \\ x_2 \end{bmatrix}} \\ {\begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix}}\#{\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}}{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix}}\#{\begin{bmatrix} x_1+x_2 \\ x_2 \end{bmatrix}} \\ {\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix}}\#{\begin{bmatrix} x_1+x_2 \\ x_2 \end{bmatrix}}\]

Another legitimate transform at this stage would be \[A\#v=u_{i} \\ u_1=A_{21}v_1 + A_{22}v_2 \\ u_2=A_{21}v_2\]
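
Both partial rules can be checked symbolically on their respective examples (translating the 1-based \(A_{ij}\) above to 0-based indexing); a sketch assuming sympy:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# First example: A is the cross inverse [[1, 1], [0, 0]], v = (x1, x1 + x2).
A, v = sp.Matrix([[1, 1], [0, 0]]), sp.Matrix([x1, x1 + x2])
u1 = A[0, 0] * v[0]                   # u1 = A_11 v_1
u2 = A[0, 0] * v[1] - A[0, 0] * v[0]  # u2 = A_1x v_2 - A_1x v_1  (A_11 = A_12 here)
assert sp.simplify(u1 - x1) == 0 and sp.simplify(u2 - x2) == 0

# Second example: A = [[0, 0], [1, -1]], v = (x1 + x2, x2).
A, v = sp.Matrix([[0, 0], [1, -1]]), sp.Matrix([x1 + x2, x2])
u1 = A[1, 0] * v[0] + A[1, 1] * v[1]  # u1 = A_21 v_1 + A_22 v_2
u2 = A[1, 0] * v[1]                   # u2 = A_21 v_2
assert sp.simplify(u1 - x1) == 0 and sp.simplify(u2 - x2) == 0
```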

At this stage one might consider approaching a matrix like \({\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}}\)... However, the normal determinant is zero! Both elements of the resulting transformed vector will equal \(x_1+x_2\) and are indistinguishable; there is no way to know the order \(x_1\) and \(x_2\) should be sorted back! (NB. Think about the relevance to quantum indistinguishability here...). Instead consider the non-singular matrix \[{\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}}{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}={\begin{bmatrix} 2x_1+x_2 \\ x_1+x_2 \end{bmatrix}} \\ \frac{1}{3}{\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}}\#{\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}}{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}=\frac{1}{3}{\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}}\#{\begin{bmatrix} 2x_1+x_2 \\ x_1+x_2 \end{bmatrix}} \\ {\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}=\frac{1}{3}{\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}}\#{\begin{bmatrix} 2x_1+x_2 \\ x_1+x_2 \end{bmatrix}}\]

Using an amalgamation of our rule set at this stage gives \[u_1=A_{1x}v_1+A_{21}v_1+A_{22}v_2 \\ u_2=A_{21}v_2+A_{1x}v_2-A_{1x}v_1\]

Which when applied, using the convenient fact that \(A_{11}=A_{12}=A_{1x}\), always true from the definition of the cross inverse (although this may not be the case for an arbitrary transform, if such a thing is even possible!), gives \[u_1=\frac{1}{3}(2x_1+x_2+2x_1+x_2-x_1-x_2)\\ u_2=\frac{1}{3}(x_1+x_2+x_1+x_2-2x_1-x_2)\\ \\ u_1=\frac{1}{3}(3x_1+x_2)\\ u_2=\frac{1}{3}(x_2)\\\]

So there are clearly some additional terms missing, if the transform is to be well defined.

We have \(x_2=\frac{1}{3}(x_2+4(x_1+x_2)-2(2x_1+x_2))\) and \(x_1=\frac{1}{3}(3x_1+x_2+1(2x_1+x_2)-2(x_1+x_2))\)
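A quick symbolic check that these two decompositions really do return \(x_1\) and \(x_2\) (sympy assumed):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
v1, v2 = 2 * x1 + x2, x1 + x2  # the transformed components from the example above

recovered_x2 = sp.Rational(1, 3) * (x2 + 4 * v2 - 2 * v1)
recovered_x1 = sp.Rational(1, 3) * (3 * x1 + x2 + v1 - 2 * v2)
print(sp.simplify(recovered_x2), sp.simplify(recovered_x1))  # x2, x1
```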

At this stage the result will become either unsolvable or very beautiful and intricate... We must rewrite the original two equations to give the same result while making room for the additional factors required...