diff --git a/section_User_psychology_Programming_languages__.tex b/section_User_psychology_Programming_languages__.tex
index fb4fa62..80f3b43 100644
--- a/section_User_psychology_Programming_languages__.tex
+++ b/section_User_psychology_Programming_languages__.tex
...
The outer product shows up in \cite[Sec. 2.24]{Householder1953}, although Householder does not give the quantity $u v^T$ a special name\footnote{While \cite[Sec. 2.03]{Householder1953} introduces ``outer products'' $[u v]$, these quantities are known today as bivectors and are conventionally denoted $u \wedge v$.}, and the discussion in context clearly implies that Householder considers vector-scalar-vector products $u \sigma v^T$ to be special cases of matrix products $U S V^T$.
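In modern notation (our gloss, not Householder's), the correspondence is simply the rank-one case of the three-factor product:
\[
u \sigma v^T = U S V^T \quad\text{with}\quad U = u,\; S = (\sigma),\; V = v .
\]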
Interestingly, \cite{Householder1953} does not use the terms ``bilinear form'', ``quadratic form'', or ``Hermitian form''; he uses ``scalar product''. \cite{Householder1955} clearly spells out his notational convention. (\cite{Householder1953} has a missing page which might also spell it out, but it is not clear.)
...
\[
\phi\rho = (\phi|x_1)\alpha_1 + (\phi|x_2)\alpha_2 + \dots + (\phi|x_\omega)\alpha_\omega
\]
Furthermore, Taber reviews C.~S. Peirce's ``forms'' or ``vids'' $(\alpha_i : \alpha_j)$, and says they are equivalent to ``elementary units'' of a matrix. It is clear
that Peirce's notation corresponds to $a_j a_i^T$ today.
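As a gloss (assuming an orthonormal basis $\{a_k\}$, an assumption of ours rather than Taber's), such a unit acts as an elementary matrix:
\[
(a_j a_i^T)\, a_k = a_j \,(a_i^T a_k) = \delta_{ik}\, a_j ,
\]
i.e., it maps the $i$-th basis vector to the $j$-th and annihilates the rest.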
He introduces Clifford's ``quadrates'' (which are just matrices).
\begin{quote}
I shall therefore call an algebra linear in $\omega^2$ of these vids a quadrate algebra of order $\omega$; and any expression linear in the vids, a \textit{quadrate form}.
...
Vectors are $\alpha$, matrices are $A = [a]^m_n$
inner product is $\alpha \cdot \beta$, synonymous with dot product, scalar product, direct product.
outer product and forms absent.
In Ch. 5, matrices; vectors are introduced as
constituents of matrices.
\S 5.4 p 36 - Matrices as ordered sets of row vectors (primarily) or column vectors (secondarily).
\S 5.5 p 37 $A = [a]^1_n = (a_1, a_2, \dots, a_n)$ is a row matrix,
$A = [a]^m_1 = \{a_1, a_2, \dots, a_m \}$ is a column matrix. (Also appears vertically)
...
Vector spaces come in Ch. 2. Vectors, as $n$-tuples, are small Greek letters.
\begin{quote}
Thus ordered $n$-tuples will be regarded here as objects which may be represented as rows, or columns, whichever may be
convenient at the moment.
p. 24, \S2-2.
\end{quote}
...
--- \cite[\S 2.3.1, pp. 2-8--9]{Lucas1968}
\end{quote}
Secondary reference – Knuth,
\url{http://bitsavers.informatik.uni-stuttgart.de/pdf/stanford/cs_techReports/STAN-CS-76-562_EarlyDevelPgmgLang_Aug76.pdf}
Plankalkul – Vektoren
...
\url{http://deepblue.lib.umich.edu/bitstream/handle/2027.42/3966/bab9692.0001.001.pdf?sequence=5&isAllowed=y}
Rutishauser – subscripted variables with
calculable offsets
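The ``calculable offset'' idea can be sketched in a few lines (a modern illustration of row-major address arithmetic, not Rutishauser's own scheme):

```python
def row_major_offset(base, shape, index):
    """Flat storage address of a subscripted variable, row-major.

    For a 2-D array of shape (m, n), element (i, j) lands at
    base + i*n + j; the Horner-style loop generalizes this to
    any number of dimensions.
    """
    addr = 0
    for extent, i in zip(shape, index):
        addr = addr * extent + i
    return base + addr

# element (1, 2) of a 3x4 array stored from address 100
print(row_major_offset(100, (3, 4), (1, 2)))  # -> 106
```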
Brooker, 1955: variables with indices vn1...vn5
\url{http://iopscience.iop.org.libproxy.mit.edu/article/10.1088/0508-3443/6/9/302/pdf}
...
IT 1956: JACKPOT!
\url{http://delivery.acm.org.libproxy.mit.edu/10.1145/810000/808963/p114-chipps.pdf?ip=18.9.61.111&id=808963&acc=ACTIVE%20SERVICE&key=7777116298C9657D%2EDE5F786C30E1A3B4%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&CFID=574921315&CFTOKEN=61166642&__acm__=1452667765_f7f48bfe2bce87dbf76b28e0a9f84cf0}
scalars, vectors, matrices and explicit indexing formulas
x is multiply, * is
exponentiation
``indicial instruction machines'' – 1959
...
\section{Classification of the four main schools of vectors and matrices}
\begin{tabular}{l}
Mostly matrices \\
diff --git a/section_reallanguages.tex b/section_reallanguages.tex
index 408f481..7bd9fdb 100644
--- a/section_reallanguages.tex
+++ b/section_reallanguages.tex
...
R's semantics of vectors and matrices are simple to reason about.
Perhaps the only behavior that some users may find surprising is that
transposition is not idempotent on vectors: \texttt{t(v)} promotes a
vector to a $1 \times n$ matrix, and \texttt{t(t(v))} yields an
$n \times 1$ matrix rather than the original vector.
Nevertheless, we have already seen
precedent in the mathematical literature
that does...
\todo{cite}
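For comparison, R's behavior can be mimicked in NumPy (the helper \texttt{r\_t} is our own model of R's \texttt{t()} semantics, not a library function):

```python
import numpy as np

# Sketch of R's t() semantics: R treats a bare vector as a column,
# so t(v) is a 1 x n matrix and t(t(v)) is an n x 1 matrix --
# neither is the original dimensionless vector.
def r_t(x):
    if x.ndim == 1:
        x = x.reshape(-1, 1)
    return x.T

v = np.array([1.0, 2.0, 3.0])
print(v.T.shape)          # NumPy contrast: transpose of 1-D is a no-op -> (3,)
print(r_t(v).shape)       # like R's t(v)    -> (1, 3)
print(r_t(r_t(v)).shape)  # like R's t(t(v)) -> (3, 1)
```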
diff --git a/subsection_The_contributions_of_Grassmann__.tex b/subsection_The_contributions_of_Grassmann__.tex
index 1f143dd..97a6f18 100644
--- a/subsection_The_contributions_of_Grassmann__.tex
+++ b/subsection_The_contributions_of_Grassmann__.tex
...
Gibbs, however, recognized the value of the general case, bequeathing it the name ``indeterminate product''.~\footnote{Some of the English literature incorrectly attributes the name to Grassmann; however, it is quite clear from \textit{Multiple Algebra} that the name is Gibbs's own invention. The closest phrase used by Grassmann is ,,ein beliebiges Produkt $P_{a_1, a_2, \dots, a_n}$`` (``an arbitrary product''~\cite[p. 196, \S 353]{Grassmann2000}), which is not defined in a formal, technical sense.} If we interpret this as using an unsymmetric Gramian $G$ and placing a new unique basis vector in each entry of $G$, then we have the basic ingredients of a tensor product.
Grassmann's other contribution to this topic is the concept of ,,offne Produkt`` or ``open product''~\footnote{Somewhat confusingly, Grassmann's 1862 book defines open products, or ``product[s] with n \{interchangeable\} openings''~\cite[\S 353, p. 196]{Grassmann2000}, in a way which we would recognize today as symmetric tensors of rank $n$.}, which he writes in Ausdehnungslehre 1844, Sec 172, p 267 (English pp 271--2) with the notation $[A() . B]$, acting on a vector $P$ by
\[
[A() . B] P = AP . B,