Vector Stacking Operation and Poltors (Polygonic Vectors)

Abstract

We investigate the results of a vector stacking operator, in analogy to matrix multiplication and the inner and outer vector products.

Introduction

We have the vector dot product \[\begin{bmatrix}a&b\end{bmatrix}\cdot\begin{bmatrix}c\\d\end{bmatrix} =ac+bd\]

and the outer ’dyadic’ product, obtained by reversing the positions of the row and column vectors,

\[\begin{bmatrix} a \\ b \end{bmatrix} \cdot \begin{bmatrix} c & d \end{bmatrix} = \begin{bmatrix} ac & ad \\ bc & bd \end{bmatrix}\]

So we investigate a stacking coupling of two column vectors such that \[\begin{bmatrix} a \\ b \end{bmatrix} \cdot \begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} ac \\ ad \\ bc \\ bd \end{bmatrix} \quad\text{or}\quad \begin{bmatrix} ac \\ bc \\ ad \\ bd \end{bmatrix}\]
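
As a quick numerical check, here is a minimal Python/NumPy sketch (the helper name `stack_product` is ours, purely for illustration): both orderings of the stacked product coincide with Kronecker products of the two vectors, taken in either order.

\begin{verbatim}
import numpy as np

def stack_product(u, v):
    """Stack two column vectors into the ordering (u1*v1, u1*v2, u2*v1, u2*v2, ...)."""
    return np.array([ui * vj for ui in u for vj in v])

u = np.array([2.0, 3.0])   # plays the role of (a, b)
v = np.array([5.0, 7.0])   # plays the role of (c, d)

print(stack_product(u, v))   # [10. 14. 15. 21.]  -> (ac, ad, bc, bd)
print(np.kron(u, v))         # identical ordering
print(np.kron(v, u))         # [10. 15. 14. 21.]  -> (ac, bc, ad, bd)
\end{verbatim}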

Higher matrices could be made with a transform such as \[M=(v\cdot u)\otimes(v^T\cdot u^T)\]

With this concept in mind let us examine the matrix multiplication operation \[\begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{bmatrix} = \begin{bmatrix} \begin{bmatrix}a&b\end{bmatrix}\cdot\begin{bmatrix}e\\g\end{bmatrix} & \begin{bmatrix}a&b\end{bmatrix}\cdot\begin{bmatrix}f\\h\end{bmatrix} \\ \begin{bmatrix}c&d\end{bmatrix}\cdot\begin{bmatrix}e\\g\end{bmatrix} & \begin{bmatrix}c&d\end{bmatrix}\cdot\begin{bmatrix}f\\h\end{bmatrix} \end{bmatrix}\]

Now let us define a similar operation by switching the row and column vectors \[\begin{bmatrix}a&b\\c&d\end{bmatrix}\otimes \begin{bmatrix}e&f\\g&h\end{bmatrix}=\begin{bmatrix} \begin{bmatrix}a\\b\end{bmatrix}\cdot\begin{bmatrix}e&g\end{bmatrix}& \begin{bmatrix}a\\b\end{bmatrix}\cdot\begin{bmatrix}f&h\end{bmatrix}\\ \begin{bmatrix}c\\d\end{bmatrix}\cdot\begin{bmatrix}e&g\end{bmatrix}& \begin{bmatrix}c\\d\end{bmatrix}\cdot\begin{bmatrix}f&h\end{bmatrix} \end{bmatrix}= \begin{bmatrix} ae & ag & af & ah \\ be & bg & bf & bh \\ ce & cg & cf & ch \\ de & dg & df & dh \end{bmatrix}\]

This appears to be the outer product of two vectors of the form \[(a, b, c, d) \otimes (e, g, f, h)\]

It can be seen that the trace of each 2×2 sub-matrix equals the corresponding entry of the regular matrix-matrix product!

However this differs from the Kronecker product, which would yield \[\begin{bmatrix}a&b\\c&d\end{bmatrix}\otimes \begin{bmatrix}e&f\\g&h\end{bmatrix}=\begin{bmatrix} a\begin{bmatrix}e&f\\g&h\end{bmatrix}& b\begin{bmatrix}e&f\\g&h\end{bmatrix}\\ c\begin{bmatrix}e&f\\g&h\end{bmatrix}& d\begin{bmatrix}e&f\\g&h\end{bmatrix} \end{bmatrix}= \begin{bmatrix} ae & af & be & bf \\ ag & ah & bg & bh \\ ce & cf & de & df \\ cg & ch & dg & dh \end{bmatrix}\]
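
To make the comparison concrete, the following Python/NumPy sketch (the helper name `stack_op` is ours) builds the stacked product from its outer-product blocks, checks the block-trace observation above against the ordinary matrix product, and contrasts the result with `np.kron`.

\begin{verbatim}
import numpy as np

def stack_op(A, B):
    """Stacking operation from the text: the (i, j) block of the result is the
    outer product of row i of A with column j of B."""
    n = A.shape[0]
    return np.block([[np.outer(A[i, :], B[:, j]) for j in range(n)]
                     for i in range(n)])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

S = stack_op(A, B)
print(S)

# The trace of each 2x2 block equals the corresponding entry of A @ B.
traces = np.array([[np.trace(S[2*i:2*i+2, 2*j:2*j+2]) for j in range(2)]
                   for i in range(2)])
print(np.allclose(traces, A @ B))   # True

# The Kronecker product arranges the same sixteen products differently.
print(np.kron(A, B))
\end{verbatim}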

Poltors

New idea, using old concepts. Statement: “In 3 dimensions, x is as much to y as it is to z”, repeating for the other permutations of x, y, z. So why do we separate them in the three-vector? x has to jump across y to get to z; they seem distant, but they are actually neighbors.

Humans like to make squares and lines; let’s try a triangle-like structure and see where it goes. This isn’t easy to typeset in vanilla LaTeX, but define a three-vector \(a\) with elements \((a_1, a_2, a_3)\) as \[/a_1 \backslash\\ /a_2 a_3\backslash\]

For symmetry reasons we can address the elements in a cyclic manner, fitting the permutations. Then, for example, one can take the dot product as normal \[/a_1 \backslash \cdot/b_1\backslash \hspace{95pt}\\ /a_2 a_3\backslash /b_2 b_3\backslash = a_1b_1+a_2b_2+a_3b_3\]

The cross product reads rather differently in this format \[\hspace{67pt}/a_1 \backslash \times/b_1\backslash \hspace{45pt} (a_2b_3-a_3b_2)\hspace{95pt}\\ /a_2 a_3\backslash \hspace{5pt} /b_2 b_3\backslash = (a_3b_1-a_1b_3) (a_1b_2-a_2b_1)\]

Here the triangle-like frame is beginning to be omitted. However, the result on the right-hand side, as a new three-vector, can now be explained by the sentence: “the element at a position is the anti-clockwise element of the LH operand times the clockwise element of the RH operand, minus the clockwise element of the LH operand times the anti-clockwise element of the RH operand”. A bit of a mouthful, but there is order here. In fact, if we define the index with a modulo count, such that indices 4 and 1 are equivalent, then we can state the cross product as \[(a \times b)_i = a_{i+1}b_{i-1}-a_{i-1}b_{i+1}\]

This is an alternative definition: much more concise, using only one free index.
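
As a sanity check, here is a minimal Python/NumPy sketch of the single-index rule (the helper name `cyclic_cross` is ours), compared against `np.cross`.

\begin{verbatim}
import numpy as np

def cyclic_cross(a, b):
    """Cross product via the cyclic-index rule
    (a x b)_i = a_{i+1} b_{i-1} - a_{i-1} b_{i+1}, indices taken modulo 3."""
    return np.array([a[(i + 1) % 3] * b[(i - 1) % 3]
                     - a[(i - 1) % 3] * b[(i + 1) % 3] for i in range(3)])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cyclic_cross(a, b))                               # [-3.  6. -3.]
print(np.allclose(cyclic_cross(a, b), np.cross(a, b)))  # True
\end{verbatim}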

Perhaps these triangles are a stepping stone on to 4-vectors. If there is a corner for each element in the construct, then that would naturally make the 4-vector equivalent a 2×2-matrix lookalike, but we still keep our rotational definition of the cross product. \[\begin{bmatrix}a_1 & a_4 \\a_2 & a_3\end{bmatrix} \times \begin{bmatrix}b_1 & b_4 \\b_2 & b_3\end{bmatrix} = \begin{bmatrix}a_2b_4 - a_4b_2 & a_1b_3-a_3b_1\\a_3b_1-a_1b_3 & a_4b_2 - a_2b_4\end{bmatrix}\]
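
The same single-index rule carries over unchanged; a short sketch under that assumption (the helper `cyclic_cross_n` and the square layout are ours, following the corner arrangement \(1,4\) over \(2,3\) used above):

\begin{verbatim}
import numpy as np

def cyclic_cross_n(a, b):
    """(a x b)_i = a_{i+1} b_{i-1} - a_{i-1} b_{i+1}, indices modulo n,
    for vectors given in cyclic order a_1, ..., a_n (zero-based in code)."""
    n = len(a)
    return np.array([a[(i + 1) % n] * b[(i - 1) % n]
                     - a[(i - 1) % n] * b[(i + 1) % n] for i in range(n)])

a = np.array([1.0, 2.0, 3.0, 4.0])   # a_1 ... a_4
b = np.array([5.0, 6.0, 7.0, 8.0])   # b_1 ... b_4
c = cyclic_cross_n(a, b)

# Place the cyclic result on the square corners in the order 1,4 over 2,3.
print(np.array([[c[0], c[3]],
                [c[1], c[2]]]))
\end{verbatim}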

Determinants of Higher Order

The usual matrix determinant, defined for a square matrix, can be expressed through a number of formulae, either via permutation symbols or via cofactor matrices (Laplace expansion). Consider the 2×2 matrix determinant defined as \[\det(A)=a_{11}a_{22}-a_{12}a_{21}\]

Then consider a similar expression for a rank-4 ’4-cubic’ tensor of size 2×2×2×2. We can flatten the tensor’s elements into a 4×4 block matrix, keeping the addresses in the format \(i,j,k,l=1,2\); the determinant analogue of the tensor will then be a 2×2 matrix, not a scalar. If it still carries the same information (whether a system of equations has a unique solution, etc.), then we need the scaled-up analogue of a vector for the 2×2×2×2 tensor to operate on.
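
For concreteness, a minimal Python/NumPy sketch of this flattening; the convention that \(i,j\) select the block row and column while \(k,l\) address the entry within a block is an assumption, chosen to match the block matrices written below.

\begin{verbatim}
import numpy as np

# A rank-4 tensor T[i, j, k, l] of shape 2x2x2x2 (zero-based indices in code).
T = np.arange(16).reshape(2, 2, 2, 2)

# Flatten to a 4x4 block matrix: block (i, j) holds the slice T[i, j, :, :],
# so that M[2*i + k, 2*j + l] = T[i, j, k, l].
M = T.transpose(0, 2, 1, 3).reshape(4, 4)
print(M)
\end{verbatim}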

Then, in the form of a matrix equation, where each 2×2 block of \(T\) acts on the corresponding 2×2 block of \(V\) by ordinary matrix multiplication, \[\sum_{j,m} T_{ijkm}V_{jml}=U_{ikl}\]

It is even possible to form equations like \[\sum_{j} T_{ijkl}v_{j}=U_{ikl}\]

The solutions, however, will depend on the result block vector \(U\) containing only matrices expressible as a linear combination of the matrices in the array \(T\). For example, a solvable equation would be \[\begin{bmatrix} \begin{bmatrix}1&1\\0&0\end{bmatrix}& \begin{bmatrix}0&1\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\1&0\end{bmatrix}& \begin{bmatrix}0&0\\1&1\end{bmatrix} \end{bmatrix} \begin{bmatrix}x_1 \\ x_2\end{bmatrix} = \begin{bmatrix} \begin{bmatrix}1&2\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\2&1\end{bmatrix} \end{bmatrix}\]
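
A quick Python/NumPy check of this example (a minimal sketch; the tensor is entered with zero-based indices in code):

\begin{verbatim}
import numpy as np

# T[i, j, k, l]: block row i, block column j, entry (k, l) within the block.
T = np.array([[[[1, 1], [0, 0]], [[0, 1], [0, 1]]],
              [[[1, 0], [1, 0]], [[0, 0], [1, 1]]]], dtype=float)

x = np.array([1.0, 1.0])              # the scalar unknowns x_1, x_2

U = np.einsum('ijkl,j->ikl', T, x)    # U_{ikl} = sum_j T_{ijkl} x_j
print(U[0])                           # [[1. 2.] [0. 1.]]
print(U[1])                           # [[1. 0.] [2. 1.]]
\end{verbatim}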

where the solution is simply \(x_1=1,x_2=1\). However, the success of a Gaussian elimination, at least in creating a row echelon form, also depends on the matrices subtracting from one another, which would be impossible in this case. A more general equation would be \[\begin{bmatrix} \begin{bmatrix}1&1\\0&0\end{bmatrix}& \begin{bmatrix}0&1\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\1&0\end{bmatrix}& \begin{bmatrix}0&0\\1&1\end{bmatrix} \end{bmatrix} \begin{bmatrix} \begin{bmatrix}x_{111}&x_{112}\\x_{121}&x_{122}\end{bmatrix}\\ \begin{bmatrix}x_{211}&x_{212}\\x_{221}&x_{222}\end{bmatrix} \end{bmatrix} = \begin{bmatrix} \begin{bmatrix}1&2\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\2&1\end{bmatrix} \end{bmatrix}\]

This reduces to the matrix equation \[\begin{bmatrix} 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{111} \\ x_{112} \\ x_{121} \\ x_{122} \\ x_{211} \\ x_{212} \\ x_{221} \\ x_{222} \end{bmatrix} = \begin{bmatrix} 1\\2\\0\\1\\1\\0\\2\\1 \end{bmatrix}\]

Using Gaussian elimination (by computer), the resulting vector \(x\) is \((1,0,0,1,1,0,0,1)\), which corresponds to a solution of two identity matrices in the block vector.
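
The whole reduction and solution can be reproduced with a short Python/NumPy sketch; it assumes the block product \(U_{ikl}=\sum_{j,m}T_{ijkm}x_{jml}\) used above, and the variable names are ours.

\begin{verbatim}
import numpy as np

T = np.array([[[[1, 1], [0, 0]], [[0, 1], [0, 1]]],
              [[[1, 0], [1, 0]], [[0, 0], [1, 1]]]], dtype=float)
U = np.array([[[1, 2], [0, 1]],
              [[1, 0], [2, 1]]], dtype=float)

# Equations: sum_{j,m} T[i,j,k,m] * X[j,m,l] = U[i,k,l].  Flattening equations
# over (i,k,l) and unknowns over (j,m,l), the coefficient of unknown (j,m,l')
# in equation (i,k,l) is T[i,j,k,m] when l == l', and zero otherwise.
M = np.einsum('ijkm,ln->ikljmn', T, np.eye(2)).reshape(8, 8)
b = U.reshape(8)

print(M.astype(int))                      # the 8x8 coefficient matrix above
print(round(float(np.linalg.det(M))))     # 1
x = np.linalg.solve(M, b)
print(x)                                  # [1. 0. 0. 1. 1. 0. 0. 1.]
print(x.reshape(2, 2, 2))                 # two identity matrices in the block vector
\end{verbatim}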

The ordering of the column vectors has been chosen as an unravelling of the rank-three tensors above. The determinant of this matrix is 1, so it is unimodular and has an integer inverse, although in general the elements need not be integers and any non-singular matrix will do. A solution was guaranteed to exist, since inserting identity matrices into the general problem recovers the earlier solution of the vector transform.

..Insert solution

Let us now approach the full general solution, starting again from \[\begin{bmatrix} \begin{bmatrix}1&1\\0&0\end{bmatrix}& \begin{bmatrix}0&1\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\1&0\end{bmatrix}& \begin{bmatrix}0&0\\1&1\end{bmatrix} \end{bmatrix} \begin{bmatrix}x_1 \\ x_2\end{bmatrix} = \begin{bmatrix} \begin{bmatrix}1&2\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\2&1\end{bmatrix} \end{bmatrix}\]

where the solution is simply \(x_1=1,x_2=1\). However the sucess of a gaussian elimination at least creating a row echelon form depends on the matrices subtracting from each other also, and would be impossible in this case. A more general equation would be \[\begin{bmatrix} \begin{bmatrix}a_{1111}&a_{1112}\\a_{1121}&a_{1122}\end{bmatrix}& \begin{bmatrix}a_{1211}&a_{1212}\\a_{1221}&a_{1222}\end{bmatrix}\\ \begin{bmatrix}a_