A Binary Representation Transform



A recreational transform is investigated. It involves taking a vector of natural numbers, expanding each element into its binary digits to form a binary matrix, transposing the matrix, and repacking the rows into a different vector, which can have a different number of elements when the matrix is non-square. This allows sets of transformations between higher- and lower-dimensional spaces to be noted.


For some \(n \in \mathbb{N}_0\), we can represent the number as a binary polynomial of finite order \(q=\lfloor\log_2 n \rfloor_+\) \[n=\sum_{k=0}^{q} a_k 2^k\]

with \(a_k \in \{0,1\}\) and \[\lfloor x \rfloor_+ := \begin{cases} \lfloor x \rfloor, & x>0\\ 0, & x\le0 \end{cases}\] Then mapping the coefficients to some row vector \(v \in \mathbb{R}^{q+1}\) will produce a construct of the form \[n \to v:= \begin{bmatrix} a_0,a_1,\cdots,a_q \end{bmatrix}\]
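
As a small illustration (a Python sketch; the function name `coefficients` is mine), the coefficients \(a_k\) can be read off directly:

```python
from math import floor, log2

def coefficients(n):
    """Return a_0 .. a_q with n = sum_k a_k * 2**k and q = floor(log2 n)_+ .

    The least significant bit comes first; later examples in the text write
    the most significant bit first, which is just this list reversed.
    """
    q = floor(log2(n)) if n > 0 else 0
    return [(n >> k) & 1 for k in range(q + 1)]

print(coefficients(19))   # [1, 1, 0, 0, 1]  ->  19 = 1 + 2 + 16
```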

Thus, a column vector consisting of elements \(n_i, \; i \in \{1,2,\dots,r\}\), with each binary expansion padded by leading zeros to the length of the longest, will map to a matrix of the form \[M:= \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1q} \\ a_{21} & a_{22} & \cdots & a_{2q} \\ \vdots & \vdots & & \vdots \\ a_{r1} & a_{r2} & \cdots & a_{rq} \\ \end{bmatrix}\]

which has the transpose \[M^T:= \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{r1} \\ a_{12} & a_{22} & \cdots & a_{r2} \\ \vdots & \vdots & & \vdots \\ a_{1q} & a_{2q} & \cdots & a_{rq} \\ \end{bmatrix}\]

Some vectors are invariant under such a transform, for example \[n_i= \begin{bmatrix}1 \\2 \end{bmatrix} \to \mathrm{bin}[n_i]= \begin{bmatrix}0 & 1 \\1 & 0 \end{bmatrix} =\mathrm{bin}[n_i]^T,\]

where the binary matrix equals its own transpose. In general, vectors that lead to a symmetric matrix are invariant under the transformation (unless the vectorization is defined such that a column vector is repacked into a row vector).
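
A minimal Python sketch of the transform as described (the names `to_bits` and `transform` are mine): each entry is written most-significant-bit first, shorter entries are left-padded with zeros to the width of the largest entry, the bit matrix is transposed, and each row of the result is read back as a number.

```python
def to_bits(n):
    """Binary digits of n, most significant bit first (n = 0 gives [0])."""
    return [int(b) for b in format(n, "b")]

def transform(vec):
    """Vector of naturals -> binary matrix -> transpose -> vector of naturals."""
    width = max(len(to_bits(n)) for n in vec)                              # columns of the bit matrix
    rows = [[0] * (width - len(to_bits(n))) + to_bits(n) for n in vec]     # left-pad to equal width
    return [int("".join(str(b) for b in col), 2) for col in zip(*rows)]    # transpose and repack

print(transform([1, 2]))   # [1, 2] -- the symmetric example above is invariant
```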

Table of the transform applied to 2-vectors with elements from 0 to 3 (the row index is the first element, the column index the second; each entry is the vector the input maps to, with a scalar output written with a leading zero, and the brackets marking the behaviour described below):

\[\begin{array}{c|cccc}
 & 0 & 1 & 2 & 3\\\hline
0 & [0,0] & [0,1] & (1,0) & (1,1)\\
1 & (0,2) & (0,3) & [1,2] & [1,3]\\
2 & [2,0] & [2,1] & (3,0) & \langle 3,1\rangle\\
3 & (2,2) & \langle 2,3\rangle & [3,2] & [3,3]
\end{array}\]

So, somewhat confusing, not so exciting. [] maps to itself, i.e. is invariant, () forms a pair, and <> forms a strange connection. But this technique can be used to map vectors into spaces of a different dimension... now exciting. For example, \[\begin{bmatrix}4 \\ 5\end{bmatrix} \to \begin{bmatrix}1&0&0 \\ 1&0&1\end{bmatrix} \to \begin{bmatrix}1&1\\0&0\\0&1\end{bmatrix} \to \begin{bmatrix}3\\0\\1\end{bmatrix}\]
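
With the `transform` sketch above, the dimension-raising example reads:

```python
print(transform([4, 5]))   # [3, 0, 1] -- two 3-bit numbers become three 2-bit numbers
```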

The extra information carried by entries with more bits is traded for a picture with fewer bits per entry in a higher dimension. Realistically there can be no invariants under a change of dimension, unless the vector merely gains an additional zero element and is otherwise unchanged. There is also the dimensional transform that is ‘one bit’ simpler: in 2D, vectors whose elements are either 0 or 1 can be mapped to a scalar.

In fact, for any vector with elements 0 or 1, in any dimension, the result can be mapped directly to a scalar, for example \[\begin{bmatrix}1\\0\\0\\1\\1\end{bmatrix} \to \begin{bmatrix}1&0&0&1&1\end{bmatrix} \to 10011 \to 19\]
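
The same sketch reproduces this collapse of a 0/1 vector to a scalar:

```python
print(transform([1, 0, 0, 1, 1]))   # [19]
```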

In this space, to travel in “direction 19” would then be to move along the vector above. This adds a simple interpretation of dimensionality to counting: each number is a unique direction, built as a combination of one or more of the basis axes 1, 2, 4, 8, .... So binary counting has a representation in high-dimensional spaces.

If 1 is the vector \((1,0)\) and 2 is the vector \((0,1)\), then \(3 = 1+2 = (1,1)\), which is true (here the least significant bit is written first). Now verify for products: requiring \(2\times3=6\), say, we need \((0,1,0)\times(1,1,0)=(0,1,1)\), and we must also enforce commutation, \((1,1,0)\times(0,1,0)=(0,1,1)\).

A guess at the moment: for two numbers \(a\times b\), compute the bitwise \(\mathrm{NOT}(a\ \mathrm{XOR}\ b)\).
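
A quick spot check of this guess, assuming the bitwise NOT is taken over a fixed width of 3 bits (so only the bits shown are flipped); this only confirms the \(2\times3\) case and is not claimed to hold in general.

```python
a, b = 2, 3
width = 3                      # work in 3-bit words: 2 = 010, 3 = 011
mask = (1 << width) - 1
guess = (~(a ^ b)) & mask      # bitwise NOT of (a XOR b), truncated to 3 bits
print(guess)                   # 6 (binary 110), matching 2 * 3
```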

2D\(\to\)2D Vector Results

We can take all input vectors with elements either \(0,1,2\) and observe all output vectors: \[\begin{bmatrix}0\\0\end{bmatrix}\to \begin{bmatrix}0\\0\end{bmatrix}\to \begin{bmatrix}0&0\end{bmatrix}\to 0 \equiv \begin{bmatrix} 0 \\ 0 \end{bmatrix}\\ \begin{bmatrix}1\\0\end{bmatrix}\to \begin{bmatrix}1\\0\end{bmatrix}\to \begin{bmatrix}1&0\end{bmatrix}\to 2 \equiv \begin{bmatrix} 2 \\ 0 \end{bmatrix}\\ \begin{bmatrix}0\\1\end{bmatrix}\to \begin{bmatrix}0\\1\end{bmatrix}\to \begin{bmatrix}0&1\end{bmatrix}\to 1 \equiv \begin{bmatrix} 1 \\ 0 \end{bmatrix}\\ \begin{bmatrix}1\\1\end{bmatrix}\to \begin{bmatrix}1\\1\end{bmatrix}\to \begin{bmatrix}1&1\end{bmatrix}\to 3 \equiv \begin{bmatrix} 3 \\ 0 \end{bmatrix}\\ \begin{bmatrix}2\\0\end{bmatrix}\to \begin{bmatrix}1&0\\0&0\end{bmatrix}\to \begin{bmatrix}1&0\\0&0\end{bmatrix}\to \begin{bmatrix}2\\0\end{bmatrix}\\ \begin{bmatrix}2\\1\end{bmatrix}\to \begin{bmatrix}1&0\\0&1\end{bmatrix}\to \begin{bmatrix}1&0\\0&1\end{bmatrix}\to \begin{bmatrix}2\\1\end{bmatrix}\\ \begin{bmatrix}2\\2\end{bmatrix}\to \begin{bmatrix}1&0\\1&0\end{bmatrix}\to \begin{bmatrix}1&1\\0&0\end{bmatrix}\to \begin{bmatrix}3\\0\end{bmatrix}\\ \begin{bmatrix}1\\2\end{bmatrix}\to \begin{bmatrix}0&1\\1&0\end{bmatrix}\to \begin{bmatrix}0&1\\1&0\end{bmatrix}\to \begin{bmatrix}1\\2\end{bmatrix}\\ \begin{bmatrix}0\\2\end{bmatrix}\to \begin{bmatrix}0&0\\1&0\end{bmatrix}\to \begin{bmatrix}0&1\\0&0\end{bmatrix}\to \begin{bmatrix}1\\0\end{bmatrix}\\\]
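
A short loop using the `transform` sketch from earlier reproduces this list (a scalar output is padded with a trailing zero, as in the text):

```python
for x in range(3):
    for y in range(3):
        out = transform([x, y])
        out = out + [0] * (2 - len(out))   # pad scalar outputs back up to 2D, zero at the bottom
        print((x, y), "->", tuple(out))
```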

It is clear that they do not obey the unary analogy to a group: after such a transformation there is an element which was not in the original set, namely \((3,0)\).

If applied to the positions of a 2D shape represented by this matrix (positions indexed from \((0,0)\) at the top left) \[\begin{bmatrix} a&b&c \\d&e&f\\g&h&i \end{bmatrix}\]

we get the transformed matrix \[\begin{bmatrix} a & \cdot & \cdot \\ bc &\cdot & f \\ dg & h & \cdot \\ ei & \cdot & \cdot \end{bmatrix}\] where a dot marks an empty position and juxtaposed letters (such as \(bc\)) mark entries that land on the same position.
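
The collision pattern can be reproduced by pushing each coordinate pair through the `transform` sketch from earlier and collecting the labels that land on the same cell (coordinates are (row, column), counted from zero):

```python
labels = [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]]
image = {}
for r in range(3):
    for c in range(3):
        out = transform([r, c])
        out = tuple(out + [0] * (2 - len(out)))   # pad scalar outputs with a trailing zero
        image[out] = image.get(out, "") + labels[r][c]

print(image)   # (1, 0) collects 'bc', (2, 0) collects 'dg', (3, 0) collects 'ei', as in the matrix above
```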

This means any original matrix with no anti-diagonal elements (\(c=e=g=0\)) preserves a one-to-one mapping, as does any matrix with the other element of each colliding pair removed (\(b=d=i=0\)). If we add \(3\) as well, to complete the set of two-digit binary numbers, \[\begin{bmatrix}0\\3\end{bmatrix}\to \begin{bmatrix}0&0\\1&1\end{bmatrix}\to \begin{bmatrix}0&1\\0&1\end{bmatrix}\to \begin{bmatrix}1\\1\end{bmatrix}\\ \begin{bmatrix}1\\3\end{bmatrix}\to \begin{bmatrix}0&1\\1&1\end{bmatrix}\to \begin{bmatrix}0&1\\1&1\end{bmatrix}\to \begin{bmatrix}1\\3\end{bmatrix}\\ \begin{bmatrix}2\\3\end{bmatrix}\to \begin{bmatrix}1&0\\1&1\end{bmatrix}\to \begin{bmatrix}1&1\\0&1\end{bmatrix}\to \begin{bmatrix}3\\1\end{bmatrix}\\ \begin{bmatrix}3\\3\end{bmatrix}\to \begin{bmatrix}1&1\\1&1\end{bmatrix}\to \begin{bmatrix}1&1\\1&1\end{bmatrix}\to \begin{bmatrix}3\\3\end{bmatrix}\\ \begin{bmatrix}3\\2\end{bmatrix}\to \begin{bmatrix}1&1\\1&0\end{bmatrix}\to \begin{bmatrix}1&1\\1&0\end{bmatrix}\to \begin{bmatrix}3\\2\end{bmatrix}\\ \begin{bmatrix}3\\1\end{bmatrix}\to \begin{bmatrix}1&1\\0&1\end{bmatrix}\to \begin{bmatrix}1&0\\1&1\end{bmatrix}\to \begin{bmatrix}2\\3\end{bmatrix}\\ \begin{bmatrix}3\\0\end{bmatrix}\to \begin{bmatrix}1&1\\0&0\end{bmatrix}\to \begin{bmatrix}1&0\\1&0\end{bmatrix}\to \begin{bmatrix}2\\2\end{bmatrix}\\\]

The vectors filled with \(0,1,2,3\) do contain all of their images under the transform. However, there are still many-to-one mappings. If we tabulate the results (the entry at row \(i\), column \(j\) being the image of \((i,j)\) written as a two-digit string), a great deal of symmetry can be seen \[\begin{bmatrix} 00&10&10&11\\20&30&12&13\\20&21&30&31\\22&23&32&33 \end{bmatrix}\]

We see each corner is a doubled value (\(00, 11, 22, 33\)). A line between any two corners only contains values built from the digits of those two corners. The lines \(00\to11\), \(00\to22\) and \(00\to33\) contain repeated values, while the lines \(11\to22\), \(11\to33\) and \(22\to33\) contain digit-reversed pairs.
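
The table above can be generated directly with the `transform` sketch from earlier, padding scalar outputs with a zero at the bottom:

```python
for x in range(4):
    row = []
    for y in range(4):
        out = transform([x, y])
        out = out + [0] * (2 - len(out))   # spare zero at the bottom, as in this section
        row.append(f"{out[0]}{out[1]}")
    print(" ".join(row))
# 00 10 10 11
# 20 30 12 13
# 20 21 30 31
# 22 23 32 33
```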

Alternative Interpretation

An important oversight in the section above is the arbitrary addition of the padding zero to the bottom of the output vector when the initial vector maps to a scalar.

We can change the first \(4\) results (corresponding to all binary numbers with two or fewer digits)

\[\begin{bmatrix}0\\0\end{bmatrix}\to \begin{bmatrix}0\\0\end{bmatrix}\to \begin{bmatrix}0&0\end{bmatrix}\to 0 \equiv \begin{bmatrix} 0 \\ 0 \end{bmatrix}\\ \begin{bmatrix}1\\0\end{bmatrix}\to \begin{bmatrix}1\\0\end{bmatrix}\to \begin{bmatrix}1&0\end{bmatrix}\to 2 \equiv \begin{bmatrix} 0 \\ 2 \end{bmatrix}\\ \begin{bmatrix}0\\1\end{bmatrix}\to \begin{bmatrix}0\\1\end{bmatrix}\to \begin{bmatrix}0&1\end{bmatrix}\to 1 \equiv \begin{bmatrix} 0 \\ 1 \end{bmatrix}\\ \begin{bmatrix}1\\1\end{bmatrix}\to \begin{bmatrix}1\\1\end{bmatrix}\to \begin{bmatrix}1&1\end{bmatrix}\to 3 \equiv \begin{bmatrix} 0 \\ 3 \end{bmatrix}\\\]

This removes the many-to-one mappings and adds further symmetry to the final table of results

\[\begin{bmatrix} 00&01&10&11\\02&03&12&13\\20&21&30&31\\22&23&32&33 \end{bmatrix}\]
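
The same loop as before reproduces this table when the only change is that the spare zero is placed on top of the output vector rather than at the bottom:

```python
for x in range(4):
    row = []
    for y in range(4):
        out = transform([x, y])
        out = [0] * (2 - len(out)) + out   # spare zero on top -- the alternative interpretation
        row.append(f"{out[0]}{out[1]}")
    print(" ".join(row))
# 00 01 10 11
# 02 03 12 13
# 20 21 30 31
# 22 23 32 33
```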

There is now a huge level of symmetry. Making a transition from \(aa\to bb\), where \(a<b\), gives the in-between stages \(aa\to ab \to ba \to bb\). Together with the knowledge that the table corners hold \(00,11,22,33\), read left to right and top to bottom, that is sufficient information to recreate the whole table. Thus we now have a transform for a \(4\times4\) matrix as \[M \to T(M) \equiv \begin{bmatrix} m_{00} & m_{01} & m_{02} & m_{03} \\ m_{10} & m_{11} & m_{12} & m_{13} \\ m_{20} & m_{21} & m_{22} & m_{23} \\ m_{30} & m_{31} & m_{32} & m_{33} \end{bmatrix} \to \begin{bmatrix} m_{00} & m_{01} & m_{10} & m_{11} \\ m_{02} & m_{03} & m_{12} & m_{13} \\ m_{20} & m_{21} & m_{30} & m_{31} \\ m_{22} & m_{23} & m_{32} & m_{33} \end{bmatrix}\]

This appears to be the transform “take the row vectors of the matrix, reshape each into a \(2\times2\) block, and place the blocks in reading order”. Some applications of this transform: \[T(I_4)=\begin{bmatrix} 1&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&1 \end{bmatrix} \\ T(P^{(1)}_4)=\begin{bmatrix} 0&1&0&0\\0&0&1&0\\0&0&1&0\\0&1&0&0 \end{bmatrix} \\\]
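
A minimal sketch of \(T\) as this row-to-block reshuffle (pure Python, no external libraries); checking it on \(I_4\) reproduces the first result above:

```python
def T(M):
    """Reshape each row of a 4x4 matrix into a 2x2 block and place the blocks in reading order."""
    out = [[0] * 4 for _ in range(4)]
    for r, row in enumerate(M):
        br, bc = divmod(r, 2)                                  # row r becomes block (br, bc)
        for k, value in enumerate(row):
            out[2 * br + k // 2][2 * bc + k % 2] = value       # entry k sits at (k // 2, k % 2) in the block
    return out

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(T(I4))   # [[1, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 1]]
```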

This could potentially be used to transform singular matrices into non-singular matrices, find a determinant, and then transform back again. Further investigation is required into whether solutions are conserved under the inverse transform, that is, whether in general \[\mathbf{A}\cdot\mathbf{B}=\mathbf{C} \\ T(T(\mathbf{A})\cdot T(\mathbf{B}))=\mathbf{C}\\\]

This can immediately be verified to not be true, but an interesting result occurs. Using the above transformed identity matrix \[\begin{bmatrix} 1&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&1 \end{bmatrix} \cdot \begin{bmatrix} 1&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&1