A Binary Representation Transform



A recreational transform is investigated. It takes a vector of natural numbers, expands each element into its binary digits to form a binary matrix, transposes the matrix, and repacks the rows into a new vector, which can have a different number of elements when the matrix is not square. This gives a family of transformations between spaces of different dimension.


For some \(n \in \mathbb{N}_0\), we can represent the number as a binary polynomial with \(q=\lfloor\log_2 n \rfloor_+ + 1\) digits, \[n=\sum_{k=1}^{q} a_k 2^{q-k},\]

with \(a_k \in \{0,1\}\) (so \(a_1\) is the most significant digit) and \[\lfloor x \rfloor_+ := \begin{cases} \lfloor x \rfloor, & x>0 \\ 0, & x\le0 \end{cases}\] Mapping the coefficients to a row vector \(v \in \mathbb{R}^q\) then produces a construct of the form \[n \to v:= \begin{bmatrix} a_1,a_2,\cdots,a_q \end{bmatrix}\]

Thus, a column vector consisting of elements \(n_i, \; i \in \{1,2,\dots,r\}\), each expanded at the common width \(q\) of the largest element (shorter numbers being zero-padded on the left), will map to a matrix of the form \[M:= \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1q} \\ a_{21} & a_{22} & \cdots & a_{2q} \\ \vdots & \vdots & & \vdots \\ a_{r1} & a_{r2} & \cdots & a_{rq} \\ \end{bmatrix}\]

This has the transpose \[M^T:= \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{r1} \\ a_{12} & a_{22} & \cdots & a_{r2} \\ \vdots & \vdots & & \vdots \\ a_{1q} & a_{2q} & \cdots & a_{rq} \\ \end{bmatrix}\]
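The whole pipeline (binary expansion at a common width, transposition, repacking) can be sketched in a few lines of Python; the helper names `to_bits`, `from_bits`, and `transform` are illustrative, not from the text:

```python
def to_bits(n, width):
    """Most-significant-bit-first binary digits of n, zero-padded to width."""
    return [(n >> (width - 1 - j)) & 1 for j in range(width)]

def from_bits(bits):
    """Repack an MSB-first bit row into an integer."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def transform(vec):
    """Expand vec into a binary matrix, transpose it, repack the rows."""
    width = max(max(n.bit_length() for n in vec), 1)  # common bit width q
    matrix = [to_bits(n, width) for n in vec]         # r x q matrix M
    return [from_bits(col) for col in zip(*matrix)]   # rows of M^T

print(transform([1, 2, 3]))  # → [3, 5]
```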

Some vectors are invariant under this transform; for example, \[n_i= \begin{bmatrix}1 \\2 \end{bmatrix} \to \mathrm{bin}[n_i]= \begin{bmatrix}0 & 1 \\1 & 0 \end{bmatrix} =\mathrm{bin}[n_i]^T,\]

where the binary matrix equals its own transpose. In general, any vector whose binary matrix is symmetric is invariant under the transformation (unless the vectorization is defined such that a column vector is packed into a row vector).
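The symmetry claim can be checked directly for the example above; a minimal Python sketch (names illustrative):

```python
def to_bits(n, width):
    """MSB-first binary digits of n, zero-padded to width."""
    return [(n >> (width - 1 - j)) & 1 for j in range(width)]

M = [to_bits(1, 2), to_bits(2, 2)]    # binary matrix for the vector (1, 2)
MT = [list(col) for col in zip(*M)]   # its transpose
print(M == MT)                        # symmetric, so (1, 2) is invariant
```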

Table of images of the 2-vector \((n_1,n_2)\) for \(n_1,n_2\) from 0 to 3 (row \(n_1\), column \(n_2\)), at a fixed width of 2 bits; each cell shows the vector's image under the transform.

\[\begin{array}{c|cccc}
 & 0 & 1 & 2 & 3 \\ \hline
0 & [0,0] & [0,1] & (1,0) & (1,1) \\
1 & (0,2) & (0,3) & [1,2] & [1,3] \\
2 & [2,0] & [2,1] & (3,0) & \langle 3,1\rangle \\
3 & (2,2) & \langle 2,3\rangle & [3,2] & [3,3] \\
\end{array}\]
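The table can be regenerated, and the pair structure verified, by a short enumeration; a plain-Python sketch, fixing the width at 2 bits as in the table:

```python
def image(vec, width=2):
    """Image of a tuple of ints under the transform at a fixed bit width."""
    rows = [[(n >> (width - 1 - j)) & 1 for j in range(width)] for n in vec]
    return tuple(int("".join(map(str, col)), 2) for col in zip(*rows))

fixed, pairs = [], set()
for a in range(4):
    for b in range(4):
        v = (a, b)
        img = image(v)
        if img == v:
            fixed.append(v)            # self-mapping [ ] entries
        else:
            pairs.add(tuple(sorted((v, img))))  # two-element swap cycles

print(len(fixed), fixed)
print(sorted(pairs))
```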

Somewhat confusing, and not so exciting: \([\,]\) marks a vector that maps to itself, i.e. is invariant, while \((\,)\) and \(\langle\,\rangle\) mark vectors that swap in pairs; since transposing twice returns the original matrix, every vector at a fixed width is either invariant or part of such a two-cycle. But this technique can also be used to map higher-dimensional vectors to lower-dimensional ones, which is now exciting. For example, \[\begin{bmatrix}4 \\ 5\end{bmatrix} \to \begin{bmatrix}1&0&0 \\ 1&0&1\end{bmatrix} \to \begin{bmatrix}1&1\\0&0\\0&1\end{bmatrix} \to \begin{bmatrix}3\\0\\1\end{bmatrix}\]
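The reduction above can be reproduced with a one-off computation; a plain-Python sketch:

```python
vec = [4, 5]
w = max(n.bit_length() for n in vec)                              # 3 bits
rows = [[(n >> (w - 1 - j)) & 1 for j in range(w)] for n in vec]  # [[1,0,0],[1,0,1]]
cols = list(zip(*rows))                                           # transpose
out = [int("".join(map(str, c)), 2) for c in cols]                # repack rows
print(out)  # → [3, 0, 1]
```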

The additional information carried by the higher bit width of the shorter vector is spread across the extra dimensions of the lower-bit picture. Realistically there can be no invariants under a change of dimension unless the vector simply gains an additional zero element and is otherwise unchanged. There is also the dimensional transform that is 'one bit' simpler: in 2D, vectors with elements of either 1 or 0 can be mapped to a scalar.

In fact, for any vector with elements 0 or 1, in any dimension, the result can be directly mapped to a scalar; for example, \[\begin{bmatrix}1\\0\\0\\1\\1\end{bmatrix} \to \begin{bmatrix}1&0&0&1&1\end{bmatrix} \to 10011 \to 19\]
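Reading an MSB-first 0/1 vector as one binary number is straightforward; a minimal Python sketch (the helper name `bits_to_scalar` is an assumption):

```python
def bits_to_scalar(bits):
    """Read an MSB-first 0/1 vector as a single binary number."""
    n = 0
    for b in bits:
        n = (n << 1) | b
    return n

print(bits_to_scalar([1, 0, 0, 1, 1]))  # → 19
```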

In this space, to travel in “direction 19” would then be to move along the vector as above. This adds a simple interpretation of dimensionality to counting: each number is a unique direction, which is a combination of one or more of the basis axes \(1, 2, 4, 8, \dots\). So binary counting has a representation in high-dimensional spaces.

If 1 is the vector \((1,0)\) and 2 is the vector \((0,1)\) (least significant bit first here), then \(3 = 1 + 2 = (1,1)\), which is true. To verify for products, require \(2 \times 3 = 6\), say; then \((0,1,0) \times (1,1,0) = (0,1,1)\), but also enforce commutation: \((1,1,0) \times (0,1,0) = (0,1,1)\).

A guess at the moment: for two numbers, compute \(a \times b\) as the bitwise \(\mathrm{NOT}(a \;\mathrm{XOR}\; b)\).
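The guess can be checked quickly with Python's bitwise operators (`^` for XOR, `~` for NOT, truncated to the working width); it reproduces the \(2 \times 3\) example and is commutative by construction, though it is not a general product rule:

```python
a, b = 2, 3
w = 3                              # working width of 3 bits, as in the example
mask = (1 << w) - 1
product_guess = ~(a ^ b) & mask    # bitwise NOT(a XOR b), truncated to w bits
print(product_guess)               # → 6, matching 2 × 3
print(~(b ^ a) & mask)             # symmetric in a and b, so commutation holds → 6
print(~(2 ^ 2) & mask)             # but not general: gives 7, not 2 × 2 = 4
```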