Vector Stacking Operation and Poltors (Polygonic Vector)

    Abstract

We investigate the results of a vector stacking operator, in analogy with matrix multiplication and the inner and outer vector products.

    Introduction

We have the vector dot product \[\begin{bmatrix}a&b\end{bmatrix}\cdot\begin{bmatrix}c\\d\end{bmatrix} =ac+bd\]

and the outer ('dyadic') counterpart of this product, obtained by reversing the positions of the row and column vectors:

    \[\begin{bmatrix} a \\ b \end{bmatrix} \cdot \begin{bmatrix} c & d \end{bmatrix} = \begin{bmatrix} ac & ad \\ bc & bd \end{bmatrix}\]

So we investigate a coupling of two column vectors such that \[\begin{bmatrix} a \\ b \end{bmatrix} \cdot \begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} ac \\ ad \\ bc \\ bd \end{bmatrix} \quad\text{or}\quad \begin{bmatrix} ac \\ bc \\ ad \\ bd \end{bmatrix}\]
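As a quick numerical sketch (the values are arbitrary), NumPy's kron applied to flat arrays reproduces both stacking orders:

\begin{verbatim}
import numpy as np

u = np.array([1.0, 2.0])   # stands in for (a, b)
v = np.array([3.0, 4.0])   # stands in for (c, d)

print(np.kron(u, v))   # [ac, ad, bc, bd] -> [3. 4. 6. 8.]
print(np.kron(v, u))   # [ac, bc, ad, bd] -> [3. 6. 4. 8.]
\end{verbatim}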

Higher matrices could be made with a transform such as \[M=(v\cdot u)\otimes(v^T\cdot u^T)\]

With this concept in mind let us examine the matrix multiplication operation \[\begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{bmatrix} = \begin{bmatrix} \begin{bmatrix}a&b\end{bmatrix}\cdot\begin{bmatrix}e\\g\end{bmatrix} & \begin{bmatrix}a&b\end{bmatrix}\cdot\begin{bmatrix}f\\h\end{bmatrix} \\ \begin{bmatrix}c&d\end{bmatrix}\cdot\begin{bmatrix}e\\g\end{bmatrix} & \begin{bmatrix}c&d\end{bmatrix}\cdot\begin{bmatrix}f\\h\end{bmatrix} \end{bmatrix}\]

Keeping this decomposition in mind, let us define a similar operation by switching the row and column vectors \[\begin{bmatrix}a&b\\c&d\end{bmatrix}\otimes \begin{bmatrix}e&f\\g&h\end{bmatrix}=\begin{bmatrix} \begin{bmatrix}a\\b\end{bmatrix}\cdot\begin{bmatrix}e&g\end{bmatrix}& \begin{bmatrix}a\\b\end{bmatrix}\cdot\begin{bmatrix}f&h\end{bmatrix}\\ \begin{bmatrix}c\\d\end{bmatrix}\cdot\begin{bmatrix}e&g\end{bmatrix}& \begin{bmatrix}c\\d\end{bmatrix}\cdot\begin{bmatrix}f&h\end{bmatrix} \end{bmatrix}= \begin{bmatrix} ae & ag & af & ah \\ be & bg & bf & bh \\ ce & cg & cf & ch \\ de & dg & df & dh \end{bmatrix}\]

This appears to be the outer product of two vectors of the form \[(a\; b\; c\; d) \otimes (e\; g\; f\; h)\]

It can be seen that the trace of each \(2\times2\) sub-matrix is the corresponding element of the regular matrix-matrix product!

However this differs from the Kronecker product, which would yield \[\begin{bmatrix}a&b\\c&d\end{bmatrix}\otimes \begin{bmatrix}e&f\\g&h\end{bmatrix}=\begin{bmatrix} a\begin{bmatrix}e&f\\g&h\end{bmatrix}& b\begin{bmatrix}e&f\\g&h\end{bmatrix}\\ c\begin{bmatrix}e&f\\g&h\end{bmatrix}& d\begin{bmatrix}e&f\\g&h\end{bmatrix} \end{bmatrix}= \begin{bmatrix} ae & af & be & bf \\ ag & ah & bg & bh \\ ce & cf & de & df \\ cg & ch & dg & dh \end{bmatrix}\]
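A short numerical sketch (NumPy, arbitrary entries) of the switched product defined above, checking the outer-product form, the trace observation, and the difference from the Kronecker product:

\begin{verbatim}
import numpy as np

def stack_product(A, B):
    # Block (i, j) of the result is outer(row_i(A), column_j(B)),
    # i.e. a column of A-entries times a row of B-entries.
    n = A.shape[0]
    return np.block([[np.outer(A[i, :], B[:, j]) for j in range(n)]
                     for i in range(n)])

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # plays the role of [[a, b], [c, d]]
B = np.array([[5.0, 6.0], [7.0, 8.0]])   # plays the role of [[e, f], [g, h]]
S = stack_product(A, B)

# (1) S is the outer product of (a, b, c, d) with (e, g, f, h)
assert np.allclose(S, np.outer(A.ravel(), B.T.ravel()))

# (2) the trace of each 2x2 sub-block is the corresponding entry of A @ B
traces = np.array([[np.trace(S[2*i:2*i+2, 2*j:2*j+2]) for j in range(2)]
                   for i in range(2)])
assert np.allclose(traces, A @ B)

# (3) S is not the Kronecker product in general
print(np.allclose(S, np.kron(A, B)))   # False
\end{verbatim}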

    Poltors

A new idea, using old concepts. Statement: "In 3 dimensions, x is as much to y as it is to z", and the same for the other permutations of x, y, z. So why do we separate them in the three-vector? x has to jump across y to get to z; they seem distant, but they are actually neighbors.

Humans like to make squares and lines; let's try a triangle-like structure and see where it goes. This isn't easy to typeset in vanilla LaTeX, but define a three-vector \(a\) with elements \((a_1, a_2, a_3)\) as \[/a_1 \backslash\\ /a_2 a_3\backslash\]

For symmetry reasons we can address the elements in a cyclic manner fitting their permutations. Then, for example, one can take the dot product as normal \[/a_1 \backslash \cdot/b_1\backslash \hspace{95pt}\\ /a_2 a_3\backslash /b_2 b_3\backslash = a_1b_1+a_2b_2+a_3b_3\]

The cross product takes on a different appearance in this format \[\hspace{67pt}/a_1 \backslash \times/b_1\backslash \hspace{45pt} (a_2b_3-a_3b_2)\hspace{95pt}\\ /a_2 a_3\backslash \hspace{5pt} /b_2 b_3\backslash = (a_3b_1-a_1b_3) (a_1b_2-a_2b_1)\]

Here the triangle-like frame is beginning to be omitted. The result on the right-hand side, as a new three-vector, can now be described by the sentence: "the element at a position is the anti-clockwise element of the left-hand operand times the clockwise element of the right-hand operand, minus the clockwise element of the left-hand operand times the anti-clockwise element of the right-hand operand". A bit of a mouthful, but there is order here. In fact, if we count the index modulo 3, so that indices 4 and 1 are equivalent, then we can state the cross product as \[(a \times b)_i = a_{i+1}b_{i-1}-a_{i-1}b_{i+1}\]

This is an alternative definition: much more concise, and it uses only one free index.
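A small sketch of the one-index rule in NumPy, checked against the built-in cross product (values arbitrary):

\begin{verbatim}
import numpy as np

def cyclic_cross(a, b):
    # (a x b)_i = a_{i+1} b_{i-1} - a_{i-1} b_{i+1}, indices counted modulo the length
    n = len(a)
    return np.array([a[(i + 1) % n] * b[(i - 1) % n]
                     - a[(i - 1) % n] * b[(i + 1) % n] for i in range(n)])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cyclic_cross(a, b))                                 # [-3.  6. -3.]
print(np.allclose(cyclic_cross(a, b), np.cross(a, b)))    # True
\end{verbatim}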

Perhaps these triangles are a stepping stone to 4-vectors. If there is a corner for each element in the construct, then that would naturally make the 4-vector equivalent a \(2\times2\) matrix lookalike. We still keep our rotational definition of the cross product. \[\begin{bmatrix}a_1 & a_4 \\a_2 & a_3\end{bmatrix} \times \begin{bmatrix}b_1 & b_4 \\b_2 & b_3\end{bmatrix} = \begin{bmatrix}a_2b_4 - a_4b_2 & a_1b_3-a_3b_1\\a_3b_1-a_1b_3 & a_4b_2 - a_2b_4\end{bmatrix}\]
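The same one-index rule applied to four elements, read anticlockwise around the square layout above, reproduces this matrix entry by entry; a brief sketch (values arbitrary):

\begin{verbatim}
import numpy as np

# a_1..a_4 read anticlockwise around the square: top-left, bottom-left,
# bottom-right, top-right, i.e. the layout [[a_1, a_4], [a_2, a_3]] above.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0, 8.0])

# (a x b)_i = a_{i+1} b_{i-1} - a_{i-1} b_{i+1}, indices counted modulo 4
r = np.roll(a, -1) * np.roll(b, 1) - np.roll(a, 1) * np.roll(b, -1)
print(r)   # r_1 = a_2 b_4 - a_4 b_2, r_2 = a_3 b_1 - a_1 b_3, ...
\end{verbatim}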

    Determinants of Higher Order

The usual matrix determinant is defined for a square matrix and can be written in a number of ways, either with permutation symbols or with cofactor matrices (Laplace expansion). Consider the \(2\times2\) matrix determinant, defined as \[\det(A)=a_{11}a_{22}-a_{12}a_{21}\]

Then consider a similar expression for a rank-4 '4-cubic' tensor of size 2x2x2x2. We can flatten the tensor's elements into a 4x4 block matrix, keeping the addresses in the format \(i,j,k,l=1,2\); the determinant analogue of the tensor will then be a 2x2 matrix, not a scalar. If it still carries the same information (that a system of equations has a unique solution, etc.), then we need the scaled-up analogue of a vector for the 2x2x2x2 tensor to operate on.

Then, in the form of a matrix equation, \[\sum_{j} T_{ijkl}V_{jkl}=U_{ikl}\]

    It is even possible to form equations like \[\sum_{j} T_{ijkl}v_{j}=U_{ikl}\]
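Both contractions, exactly as written, can be expressed in NumPy's einsum notation; a sketch with arbitrary entries:

\begin{verbatim}
import numpy as np

T = np.arange(16.0).reshape(2, 2, 2, 2)   # an arbitrary 2x2x2x2 tensor T_ijkl
V = np.arange(8.0).reshape(2, 2, 2)       # a rank-3 block "vector" V_jkl
v = np.array([1.0, 2.0])                  # an ordinary vector v_j

U1 = np.einsum('ijkl,jkl->ikl', T, V)     # sum_j T_ijkl V_jkl = U_ikl
U2 = np.einsum('ijkl,j->ikl', T, v)       # sum_j T_ijkl v_j   = U_ikl
print(U1.shape, U2.shape)                 # (2, 2, 2) (2, 2, 2)
\end{verbatim}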

The solutions will, however, depend on the result block vector \(U\) containing only matrix elements that are possible as a linear combination of the matrices in the array \(T\). For example, a solvable equation would be \[\begin{bmatrix} \begin{bmatrix}1&1\\0&0\end{bmatrix}& \begin{bmatrix}0&1\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\1&0\end{bmatrix}& \begin{bmatrix}0&0\\1&1\end{bmatrix} \end{bmatrix} \begin{bmatrix}x_1 \\ x_2\end{bmatrix} = \begin{bmatrix} \begin{bmatrix}1&2\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\2&1\end{bmatrix} \end{bmatrix}\]

where the solution is simply \(x_1=1,x_2=1\). However, the success of a Gaussian elimination, at least in creating a row echelon form, also depends on the matrices subtracting from each other, which would be impossible in this case. A more general equation would be \[\begin{bmatrix} \begin{bmatrix}1&1\\0&0\end{bmatrix}& \begin{bmatrix}0&1\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\1&0\end{bmatrix}& \begin{bmatrix}0&0\\1&1\end{bmatrix} \end{bmatrix} \begin{bmatrix} \begin{bmatrix}x_{111}&x_{112}\\x_{121}&x_{122}\end{bmatrix}\\ \begin{bmatrix}x_{211}&x_{212}\\x_{221}&x_{222}\end{bmatrix} \end{bmatrix} = \begin{bmatrix} \begin{bmatrix}1&2\\0&1\end{bmatrix}\\ \begin{bmatrix}1&0\\2&1\end{bmatrix} \end{bmatrix}\]

    This reduces to the matrix equation \[\begin{bmatrix} 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{111} \\ x_{112} \\ x_{121} \\ x_{122} \\ x_{211} \\ x_{212} \\ x_{221} \\ x_{222} \end{bmatrix} = \begin{bmatrix} 1\\2\\0\\1\\1\\0\\2\\1 \end{bmatrix}\]

Using Gaussian elimination (by computer), the resulting vector \(x\) is \((1,0,0,1,1,0,0,1)\), which corresponds to a solution of two identity matrices in the block vector.

The ordering of the column vectors has been chosen as an unravelling of the rank-3 tensors above. The determinant of this matrix is 1, so it is unimodular and integer-invertible; the elements need not be integers, though, so in general any non-singular matrix will do. We knew a solution existed, since inserting identity matrices for the unknowns in this problem reproduces the vector-transform example above.
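A sketch that builds this 8x8 system with NumPy (the flattening order follows the text) and checks both the solution and the determinant:

\begin{verbatim}
import numpy as np

# T[i, j] is the 2x2 block in block-row i, block-column j (0-indexed).
T = np.array([[[[1, 1], [0, 0]], [[0, 1], [0, 1]]],
              [[[1, 0], [1, 0]], [[0, 0], [1, 1]]]], dtype=float)
C = np.array([[[1, 2], [0, 1]], [[1, 0], [2, 1]]], dtype=float)

# Flatten sum_j T[i,j] @ X[j] = C[i] into M x = c, with rows ordered (i, k, l)
# and unknowns ordered (j, m, l), matching the 8x8 matrix above.
M = np.einsum('ijkm,ln->ikljmn', T, np.eye(2)).reshape(8, 8)
c = C.reshape(8)

x = np.linalg.solve(M, c)
print(x)                            # [1. 0. 0. 1. 1. 0. 0. 1.]
print(round(np.linalg.det(M)))      # 1, so M is unimodular
\end{verbatim}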

    ..Insert solution

Now approach the fully general problem by replacing the specific coefficients with arbitrary elements \(a_{ijkl}\): \[\begin{bmatrix} \begin{bmatrix}a_{1111}&a_{1112}\\a_{1121}&a_{1122}\end{bmatrix}& \begin{bmatrix}a_{1211}&a_{1212}\\a_{1221}&a_{1222}\end{bmatrix}\\ \begin{bmatrix}a_{2111}&a_{2112}\\a_{2121}&a_{2122}\end{bmatrix}& \begin{bmatrix}a_{2211}&a_{2212}\\a_{2221}&a_{2222}\end{bmatrix} \end{bmatrix} \begin{bmatrix} \begin{bmatrix}x_{111}&x_{112}\\x_{121}&x_{122}\end{bmatrix}\\ \begin{bmatrix}x_{211}&x_{212}\\x_{221}&x_{222}\end{bmatrix} \end{bmatrix} = \begin{bmatrix} \begin{bmatrix}c_{111}&c_{112}\\c_{121}&c_{122}\end{bmatrix}\\ \begin{bmatrix}c_{211}&c_{212}\\c_{221}&c_{222}\end{bmatrix} \end{bmatrix}\]

This reduces to the matrix equation \[\begin{bmatrix} a_{1111} & 0 & a_{1112} & 0 & a_{1211} & 0 & a_{1212} & 0 \\ 0 & a_{1111} & 0 & a_{1112} & 0 & a_{1211} & 0 & a_{1212} \\ a_{1121} & 0 & a_{1122} & 0 & a_{1221} & 0 & a_{1222} & 0 \\ 0 & a_{1121} & 0 & a_{1122} & 0 & a_{1221} & 0 & a_{1222} \\ a_{2111} & 0 & a_{2112} & 0 & a_{2211} & 0 & a_{2212} & 0 \\ 0 & a_{2111} & 0 & a_{2112} & 0 & a_{2211} & 0 & a_{2212} \\ a_{2121} & 0 & a_{2122} & 0 & a_{2221} & 0 & a_{2222} & 0 \\ 0 & a_{2121} & 0 & a_{2122} & 0 & a_{2221} & 0 & a_{2222} \end{bmatrix} \begin{bmatrix} x_{111} \\ x_{112} \\ x_{121} \\ x_{122} \\ x_{211} \\ x_{212} \\ x_{221} \\ x_{222} \end{bmatrix} = \begin{bmatrix} c_{111} \\ c_{112} \\ c_{121} \\ c_{122} \\ c_{211} \\ c_{212} \\ c_{221} \\ c_{222} \end{bmatrix}\]

Removing the zero padding via block matrix notation again (where \(I\) is the \(2\times2\) identity matrix), \[\begin{bmatrix} a_{1111}I & a_{1112}I & a_{1211}I & a_{1212}I \\ a_{1121}I & a_{1122}I & a_{1221}I & a_{1222}I \\ a_{2111}I & a_{2112}I & a_{2211}I & a_{2212}I \\ a_{2121}I & a_{2122}I & a_{2221}I & a_{2222}I \\ \end{bmatrix} \begin{bmatrix} x_{111} \\ x_{112} \\ x_{121} \\ x_{122} \\ x_{211} \\ x_{212} \\ x_{221} \\ x_{222} \end{bmatrix} = \begin{bmatrix} c_{111} \\ c_{112} \\ c_{121} \\ c_{122} \\ c_{211} \\ c_{212} \\ c_{221} \\ c_{222} \end{bmatrix}\]
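Numerically, this zero-padded matrix is the Kronecker product of the flattened 4x4 coefficient matrix with the 2x2 identity; a quick check with random coefficients (a sketch, using the flattening conventions of the text):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2, 2, 2))       # general coefficients a_ijkl

# Flattened 4x4 matrix: rows indexed by (i, k), columns by (j, l).
A = T.transpose(0, 2, 1, 3).reshape(4, 4)

# The zero-padded 8x8 matrix above, rows (i, k, l) and columns (j, m, l').
M = np.einsum('ijkm,ln->ikljmn', T, np.eye(2)).reshape(8, 8)

# Each coefficient a_ijkm becomes a_ijkm * I, i.e. the Kronecker product A (x) I.
assert np.allclose(M, np.kron(A, np.eye(2)))
\end{verbatim}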

Taking the matrix equivalent of the determinant for this general system, with \(2\times2\) matrix products in place of scalar products (i.e. \(A_{11}A_{22}-A_{12}A_{21}\), where \(A_{ij}\) are the blocks above), yields the matrix determinant \[\begin{bmatrix} a_{1111}a_{2211}+a_{1112}a_{2221}-a_{1211}a_{2111}-a_{1212}a_{2121} & a_{1111}a_{2212}+a_{1112}a_{2222}-a_{1211}a_{2112}-a_{1212}a_{2122} \\ a_{1121}a_{2211}+a_{1122}a_{2221}-a_{1221}a_{2111}-a_{1222}a_{2121} & a_{1121}a_{2212}+a_{1122}a_{2222}-a_{1221}a_{2112}-a_{1222}a_{2122} \end{bmatrix}\]

By further taking the actual determinant of this matrix determinant we get the expression \[(a_{1111}a_{2211}+a_{1112}a_{2221}-a_{1211}a_{2111}-a_{1212}a_{2121})(a_{1121}a_{2212}+a_{1122}a_{2222}-a_{1221}a_{2112}-a_{1222}a_{2122})-\\(a_{1111}a_{2212}+a_{1112}a_{2222}-a_{1211}a_{2112}-a_{1212}a_{2122})(a_{1121}a_{2211}+a_{1122}a_{2221}-a_{1221}a_{2111}-a_{1222}a_{2121})\]

This will result in 32 terms; if 8 of them cancel, it could have the same 24 terms as a 4x4 determinant... Let's see. For a generic 4x4 matrix with entries labelled \(a, b, c, \dots, p\) (read row by row), the determinant is \[a f k p- a f l o- a g j p+ a g l n+\\ a h j o- a h k n- b e k p+ b e l o+\\ b g i p- b g l m- b h i o+ b h k m+\\ c e j p- c e l n- c f i p+ c f l m+\\ c h i n- c h j m- d e j o+ d e k n+\\ d f i o- d f k m- d g i n+ d g j m\]

Writing the expression above in the same notation, with \(a,\dots,p\) identified row by row with \(a_{1111},a_{1112},a_{1211},a_{1212},a_{1121},\dots,a_{2221},a_{2222}\) (the flattened ordering used below), gives \[(ak+bo-ci-dm)(el+fp-gj-hn)-\\(al+bp-cj-dn)(ek+fo-gi-hm)\]

Expanding, the terms \(aekl\), \(bfop\), \(cgij\) and \(dhmn\) cancel in pairs, so 8 of the 32 terms drop out and 24 remain: \[afkp - aflo + agil - agjk + ahlm - ahkn -\\ bekp + belo + bgip - bgjo + bhmp - bhno +\\ cejk - ceil - cfip + cfjo + chin - chjm +\\ dekn - delm - dfmp + dfno - dgin + dgjm\]

Twelve of these terms coincide with terms of the 4x4 determinant above, but the other twelve do not, so this construction is not the ordinary 4x4 determinant in disguise.
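A numerical comparison of the two quantities (a sketch with random integer coefficients; the flattening conventions are those used in the text):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = rng.integers(-3, 4, size=(2, 2, 2, 2)).astype(float)   # random a_ijkl

# 2x2 "matrix determinant" of the tensor: A11 A22 - A12 A21 with matrix products.
D = T[0, 0] @ T[1, 1] - T[0, 1] @ T[1, 0]

# Ordinary 24-term determinant of the flattened 4x4 matrix (rows (i,k), cols (j,l)).
A = T.transpose(0, 2, 1, 3).reshape(4, 4)

print(np.linalg.det(D))   # determinant of the matrix determinant
print(np.linalg.det(A))   # ordinary 4x4 determinant
# The two values generally differ, in line with the term-by-term comparison above.
\end{verbatim}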

We can also write the matrix from before, with the unknowns reordered, as \[\begin{bmatrix} a_{1111} & a_{1112} & a_{1211} & a_{1212} & 0 & 0 & 0 & 0 \\ a_{1121} & a_{1122} & a_{1221} & a_{1222} & 0 & 0 & 0 & 0 \\ a_{2111} & a_{2112} & a_{2211} & a_{2212} & 0 & 0 & 0 & 0 \\ a_{2121} & a_{2122} & a_{2221} & a_{2222} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & a_{1111} & a_{1112} & a_{1211} & a_{1212} \\ 0 & 0 & 0 & 0 & a_{1121} & a_{1122} & a_{1221} & a_{1222} \\ 0 & 0 & 0 & 0 & a_{2111} & a_{2112} & a_{2211} & a_{2212} \\ 0 & 0 & 0 & 0 & a_{2121} & a_{2122} & a_{2221} & a_{2222} \end{bmatrix} \begin{bmatrix} x_{111} \\ x_{121} \\ x_{211} \\ x_{221} \\ x_{112}\\ x_{122}\\ x_{212}\\ x_{222} \end{bmatrix} = \begin{bmatrix} c_{111} \\ c_{121} \\ c_{211} \\ c_{221} \\ c_{112}\\ c_{122} \\ c_{212}\\ c_{222} \end{bmatrix}\]

This is the direct sum of the flattened 4x4 matrix \(A\) with itself: \[(A \oplus A)x=c\]
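And a check of the direct-sum statement: permuting rows and columns of the zero-padded matrix so that the last index becomes the outermost one gives exactly \(A \oplus A\) (a sketch):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 2, 2, 2))                  # coefficients a_ijkl

A = T.transpose(0, 2, 1, 3).reshape(4, 4)              # flattened 4x4 matrix
M = np.einsum('ijkm,ln->ikljmn', T, np.eye(2)).reshape(8, 8)   # zero-padded form

# Reorder rows and columns so the last index l becomes the outermost one,
# i.e. the unknown ordering (x_111, x_121, x_211, x_221, x_112, ...) used above.
perm = [0, 2, 4, 6, 1, 3, 5, 7]
assert np.allclose(M[np.ix_(perm, perm)], np.kron(np.eye(2), A))   # = A直和A, block-diagonal
\end{verbatim}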