Investigate what I perceive to be a Hilbert space of numbers. I use the concept of a number unit, in analogy to length, area, volume, etc., such that a prime has dimensions of \(p\). Compounds and partitions are visualised.

We have an infinite set of orthogonal axes, each of them labelled by a prime number. Together they construct a Hilbert space. The concepts of the numbers \(1\) and \(0\) are different here, and they do not have their own axes as such. \(0\) is the centre, the origin. \(1\) appears to transcend the entire space.

We start with two axes at right angles, \(2\) and \(3\), using the unit notation. Much as a mass means little without kg written after it, we identify these as prime elements: they carry a single unit \(p\), like length, but this unit is prime. Rewriting them we have \(2p\) and \(3p\). We cannot write \(1\) and \(0\) with this dimension. They are the identity and null elements of the space, special objects. Otherwise \(1p\times 2p=2p^2\), which is untrue.

However, \(2p\times3p=6p^2\), and we then see that \(6p^2\) is a number which has \(2\) prime factors (notice I count without the \(p\) when addressing a quantity rather than an element of the space).

In my mind, \(6p^2\) looks like a sheet: if \(2p\) and \(3p\) are unit vectors, then this is their dyadic product. However, we must not forget we are in an infinite-dimensional Hilbert space.

Any number \(Np^m\) could be expressed as a particular vector. The regular representation transforms as the lines below. \[\vec{v} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k} + a_4 \hat{l} + \cdots \\ Np^m = a_1\hat{2p} + a_2\hat{3p} + a_3\hat{5p} + a_4 \hat{7p} + \cdots\]

This chain goes on to infinity; however, for primes, only one of the \(a_i\) will be \(1\) and all others \(0\). For other numbers, \[Np^m = \prod_{i=1}^{\infty} P_i^{a_i}p^{a_i}\]

where \[m=\sum_{i=1}^\infty a_i\] is the number of non-distinct (degenerate) prime divisors.
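As a concrete check of the exponent vector and the count \(m\), here is a minimal Python sketch; the hard-coded list of the first six primes is an illustrative assumption, sufficient for any \(N\) whose prime factors are at most \(13\):

```python
def prime_exponents(n, primes=(2, 3, 5, 7, 11, 13)):
    """Return the exponent vector (a_1, a_2, ...) of n over the given primes.

    The truncated prime list is an assumption for illustration; the true
    object is an infinite vector with finitely many nonzero entries.
    """
    exps = []
    for p in primes:
        a = 0
        while n % p == 0:
            n //= p
            a += 1
        exps.append(a)
    return exps

a = prime_exponents(12)  # 12 = 2^2 * 3, so a = [2, 1, 0, 0, 0, 0]
m = sum(a)               # m = 3 non-distinct prime divisors, i.e. 12 is 12p^3
```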

If we take restricted infinite vectors, we can then identify \(6p^2\) as such a dyadic product:

\[\begin{bmatrix}1 \\ 0 \\ 0 \\\vdots \end{bmatrix} \begin{bmatrix}0 & 1 & 0 & \cdots \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \\ \end{bmatrix}\]

In that infinite matrix, the single nonzero element sits at the \((2p, 3p)\) position; the whole object represents \(6p^2\).
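A minimal numerical sketch (assuming NumPy, and truncating the infinite basis to the first four primes \(2, 3, 5, 7\)) reproduces this matrix:

```python
import numpy as np

E = np.eye(4)                # E[i] is the unit vector for the (i+1)-th prime
six = np.outer(E[0], E[1])   # dyadic product of 2p and 3p, representing 6p^2
# six has a single 1 in row 0 (the 2p axis), column 1 (the 3p axis)
```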

In my old version of this theory I had made numbers like \(4\) still be vectors. Now I realise this is wrong. The numbers are \(4p^2\) and cannot be vectors. We see that for a given number \(Np^m\), the object required to represent that number is a rank-\(m\) array! Horrible.

Thus we also see that each such rank-2 number has the representation \(P\otimes P\), where the \(P\) are infinite prime vectors and the dyadic product is taken between the two.

Orthogonality:

The primes are orthogonal, and make up the unit vectors/basis vectors of this space.

Thus \(2p^1 \cdot 3p^1=0\), and more generally \(ap^1 \cdot bp^1= \delta_{ab}\) for \(a,b \in \mathbb{P}\), the primes.

This can be written as a dot product of our infinite vectors.
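In a truncated NumPy sketch (basis cut to the first four primes, an assumption for illustration), the dot products come out exactly as the \(\delta_{ab}\) rule states:

```python
import numpy as np

E = np.eye(4)        # rows are the prime unit vectors 2p, 3p, 5p, 7p
d23 = E[0] @ E[1]    # 2p . 3p = 0: distinct primes are orthogonal
d22 = E[0] @ E[0]    # 2p . 2p = 1: each prime is a unit vector
```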

However, the concept of orthogonality may generalise to larger, non-prime numbers. This requires an insight, a change to our theory.

Consider the dyadic forms of the numbers \(2p\cdot 5p = 10p^2\) and \(5p\cdot 7p=35p^2\). We may take their matrix product below \[\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\]

The result is not zero (the dyadic zero \(0p^2\), if you like). In fact the result is \(14p^2\).
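The same contraction can be checked numerically (NumPy sketch, basis truncated to the primes \(2, 3, 5, 7\)):

```python
import numpy as np

E = np.eye(4)                       # basis truncated to 2, 3, 5, 7
ten = np.outer(E[0], E[2])          # 10p^2 as the dyad 2p . 5p
thirty_five = np.outer(E[2], E[3])  # 35p^2 as the dyad 5p . 7p
fourteen = ten @ thirty_five        # matrix product contracts the shared 5p
# fourteen equals the dyadic form of 14p^2, i.e. 2p . 7p
```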

Let us review the procedure: We have some operation \(\odot\) such that \[(2p\cdot5p) \odot (5p \cdot 7p) = (2p \cdot 7p)\]

The \(5p\) term appears to have contracted between the two. Brilliant. We can contract again, but this time with a prime as a vector from the left. \[2p \odot (2p \cdot 7p) = 7p \\ \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix}\]

Or from the right \[(2p \cdot 7p) \odot 7p = 2p \\ \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}\] And then of course both at once giving \[2p \odot (2p \cdot 7p) \odot 7p = 1\]
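Both one-sided contractions and the double contraction can be verified in the same truncated NumPy sketch:

```python
import numpy as np

E = np.eye(4)                    # basis truncated to 2, 3, 5, 7
fourteen = np.outer(E[0], E[3])  # 14p^2 as the dyad 2p . 7p
left = E[0] @ fourteen           # contract 2p from the left, leaving 7p
right = fourteen @ E[3]          # contract 7p from the right, leaving 2p
both = E[0] @ fourteen @ E[3]    # contract both factors, leaving the scalar 1
```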

This concept appears to be division by a factor. The non-commutativity of the algebra is a little concerning; however, there is always the symmetric version obtained by transposing the matrices. We notice that if some number is represented by a matrix \(Np^2\), then \(N^Tp^2\) represents the same number. In much the same way, the column and row vectors with prime-indicating elements act the same way in either orientation.

This is exciting; there may be a generalisation to cubic structures that will help to develop these ideas further.

We should also be able to write curious expressions such as \[(2p\cdot 3p) \odot (3p \cdot 5p) \odot (5p \cdot 7p) = 2p \cdot 7p\]

which is in analogy to \[2\cdot\frac{3}{3}\cdot\frac{5}{5}\cdot7=14\]

I note here that if \[(2p\cdot 3p) + (3p \cdot 5p) + (5p \cdot 7p) + \cdots= A\]

Then \[A= \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 1 & 0 & 0 & \cdots \\ 0 & 0 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\ \end{bmatrix}\]

Then I note that \[\sum_{k=1}^\infty \sqrt{k}(p_k \cdot p_{k+1}) = a \\ \sum_{k=1}^\infty \sqrt{k}(p_{k+1} \cdot p_{k}) = a^\dagger\]

Where \(a\) and \(a^\dagger\) are the bosonic quantum annihilation and creation operators, respectively.

If we insisted that \[[a,a^\dagger]=1\]

We would invoke quantum-field-like rules. Thus we have \[\sum_{k=1}^\infty \sqrt{k}(p_{k} \cdot p_{k+1})\odot\sum_{k'=1}^\infty \sqrt{k'}(p_{k'+1} \cdot p_{k'}) - \sum_{k=1}^\infty \sqrt{k}(p_{k+1} \cdot p_{k})\odot\sum_{k'=1}^\infty \sqrt{k'}(p_{k'} \cdot p_{k'+1}) = I_\infty\]
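A finite truncation of these sums (NumPy sketch; the size \(n=8\) is an arbitrary assumption) confirms that the commutator is the identity away from the cut-off:

```python
import numpy as np

n = 8
a = np.zeros((n, n))
for k in range(1, n):
    a[k - 1, k] = np.sqrt(k)   # sum_k sqrt(k) (p_k . p_{k+1}), truncated
adag = a.T                     # the transposed sum gives a-dagger
comm = a @ adag - adag @ a
# Away from the truncation edge comm matches the identity; the last
# diagonal entry, -(n - 1), is purely an artifact of cutting the sum off.
```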

In this representation. But we may also notice that \[\sum_{k=1}^\infty (p_k \cdot p_k) = I_\infty\]

And thus the two expressions must be equal!

There is some scope for things such as \[2p + 2p = 4p^2 \\ \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 \end{bmatrix} \to \begin{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} \\ 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\]

But this system appears to break down in many circumstances. For this particular example, one could continue to add \(2p\) and generate \(2^np^n\)-like structures of rank \(n\) quite comfortably.
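The repeated dyadic product can be sketched with NumPy's `np.multiply.outer`, building the rank-\(n\) array for \(2^np^n\) (here \(n=3\), i.e. \(8p^3\), with the basis truncated to the primes \(2\) and \(3\) as an assumption):

```python
import numpy as np

two = np.eye(2)[0]    # vector for 2p in a basis truncated to primes 2, 3
t = two
for _ in range(2):    # two more dyadic products: 2p (x) 2p (x) 2p
    t = np.multiply.outer(t, two)
# t is a rank-3 array with a single 1 at index (0, 0, 0), representing 8p^3
```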

Obviously this conflates addition with multiplication, as this is still just the dyadic product. Something else is needed, some other operation.